Reductionism and intelligence
Assuming in this argument that free will and consciousness actually exist, at what point do they begin to control our bodies? The way I see it, there are three possibilities.
1) There is an as-yet unidentified ingredient of matter which is responsible for free will
2) Free will is a fundamental property of matter
3) All the constituents of matter are "dead", but are collectively granted a mind by virtue of their organisation
One way of approaching this is to ask "at what point does a baby begin to think?".
1) can't really be argued about; it's shrouded in ignorance, but we seem fairly confident that there's little, if anything, factual missing from chemistry.
2) will be leapt on by advocates of superposition of quantum states etc. The result of an observation appears to be essentially random (as do the digits of pi, although we know they are anything but), but one could believe a particle settles to a given state because it "wants" to, and that minds result from the constituent parts forming a consensus.
3) is a very similar line of thought to 2), except it applies to the intangible organisation of matter, even though all the parts may be "dead". Here, Platonism plays a big role. 3) seems to be the stance of followers of strong AI, but it always seemed suspicious to me. They say that if you could build a mechanical replica of the functions of the human brain, it too would think, and also that this replica could be written down and would acquire thought by virtue of being read, presumably by anyone, although a Turing machine is typical. This seems silly to me: it suggests the encoding is essentially irrelevant (e.g. a variety of Turing-complete machines with totally different instruction sets). By this argument one could devise a bijection between any two systems of the same complexity, e.g. neurons in the brain with a subset of stars in the sky. Thus, everyone's mind exists everywhere.
And if the computer program was stored, but not executed, could the way the wind blows be considered an encoding of some mind, which was reading the program? Reality subject to point-of-view I can cope with; reality subject to definition seems silly.
2) is philosophically appealing to me. It allows one a great degree of freedom in speculation and belief. It may be that consciousness and free will are liquid-like, flowing between systems, favouring complex ones because that is where the essence of mind is most needed, and presumably being a part of a more important system is somehow nicer. In this case we don't permanently own our minds. It also doesn't exclude ESP etc. It also means that even the simplest systems, like light switches, possess a trivial amount of intelligence, only capable of manifesting itself in negligible ways. I find it quite amusing that people are trying to make computers think, when they may already be thinking: "Help! I'm stuck in a computer!". The humane thing to do would be to smash the microchip and release the free will to spread into other things. Maybe turning it off would be just as good. Maybe simple things such as microchips are home to lazy minds. This is only one interpretation, of course.
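The "neurons mapped to stars" reductio can be made concrete in a few lines. This is a minimal sketch (the labels and the choice of Python are my own illustration, not anything from the post): any two equal-sized collections admit a bijection, so the mere existence of a mapping carries no structure.

```python
import random

# Sketch of the reductio: any two collections of equal size admit a
# bijection, so the mere existence of a mapping carries no structure.
# The labels below are purely illustrative.
neurons = [f"neuron_{i}" for i in range(10)]
stars = [f"star_{i}" for i in range(10)]

# Shuffle to stress that the pairing is arbitrary: any of the 10!
# possible bijections is as "valid" as any other.
random.shuffle(stars)
bijection = dict(zip(neurons, stars))

# The map is invertible, i.e. a genuine bijection...
inverse = {star: neuron for neuron, star in bijection.items()}
assert all(inverse[bijection[n]] == n for n in neurons)

# ...yet it says nothing about the dynamics of either system, which is
# why "any equal-complexity system encodes any mind" proves too much.
print(len(bijection))  # 10 distinct pairs
```

The point of the shuffle is that nothing privileges one pairing over another; the mapping comes for free, so it cannot by itself be what carries a mind.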
spraff.net: don't laugh, I'm still just starting...
If there were such a thing as free will (which I personally am very sceptical of), I would definitely go for 3. But due to my scepticism I can't argue about it :)
--Spencer"All in accordance with the prophecy..."
There is no "free will", didn't you get the memo?
From,
Nice coder
No, there is, but you have to choose the blue pill... or was it the red one... ugh... decisions.
Quote:
Original post by Anonymous Poster
what evidence is there that you think?
Oh, for heaven's sake, it's an assumption, given in the first line.
I think that the OP's original discussion is suggesting that "free will" is something we have over and above a body, in the same sense that some people think that the mind is something over and above the functioning of the brain. Sure, we don't yet fully understand the brain; however, most of the functionality of the body, and even some of the decision making of the mind, can be attributed to known states and functional relationships of elements of the brain.
Can you offer any evidence that, or even a good hypothesis as to why, 'free will' would be able to 'move between systems'? What you're basically saying here is that you believe we have a soul (and no, I'm not suggesting that a soul necessarily be of a spiritual/religious nature. I'm suggesting only that a soul be an ethereal aspect of a thing separate from the physical aspects, which would account for your definition of 'free will'). It also appears that by extension, you're saying that a light switch has a very small, simplistic soul.
Personally, I see no reason to posit souls or other intangible, ethereal constructs to explain 'free will'. Indeed, I don't think we even need to have free will to be intelligent, rational beings. 'Rational' and 'Irrational' are subjective concepts based on values. There is nothing to suggest that we ever make a decision contrary to our values or desires; simply that we might not always be aware of what those values or desires are or the extent to which they relate to the decision context.
Here's a little thought experiment to consider: Suppose that one day we learn how to build a nano-device that can replicate the input-output functionality of any sort of cell in the human body. Furthermore, assume that the device can devour a cell in your body and replicate its behaviour. So, now I could inject some of these devices into your body and they would replace some of your organic cells, but because they perfectly mimic the input-output functionality of the cell they replaced, you would be functionally no different than if you were made entirely of organic cells.
Now, consider that I keep replacing your organic cells with nano-cells. If I were to continue until all of your organic cells had been replaced, you would be made entirely of nano-cells and would not be the same physical matter you were before I started. Are you still you? Functionally, you would be exactly the same. Would you be 'intelligent'? There's no reason to believe you wouldn't be. If you had a soul, or some other ethereal form of free will, would it still be connected to your new body? If so, then it can only be connected to your body through functional relationships of the matter making up your body. In which case, Occam's Razor would suggest that your 'soul' and your 'free will' are only perceptions of the functionality of your body, rather than things connected to it. For all intents and purposes you would be you, just with a body made of slightly different matter.
Cheers,
Timkin
Quote:
Original post by walkingcarcass
Assuming in this argument that free will and consciousness actually exist, at what point do they begin to control our bodies?
The way I see it, there are three possibilities.
1) There is an as-yet unidentified ingredient of matter which is responsible for free will
This is definitely an assumption of materialist objectivism. If we subscribe to this ontological viewpoint, then we must live with Cartesian duality. However, if mind is an epiphenomenon of matter (the brain), then we are deterministic machines and hence cannot have free will. To see how hard it is to assume that something made up of matter can have free will, imagine for a second a particle, subatomic or not, being able to influence itself.
Quote:
2) Free will is a fundamental property of matter
This is a viewpoint more espoused by noumenalists or other ontologies which hold that matter does not have primacy. Reductionism does not hold well in non-materialist metaphysical viewpoints because you cannot simply observe the parts and then sum them up to deduce the whole.
Quote:
3) All the constituents of matter are "dead", but are collectively granted a mind by virtue of their organisation
This is a belief that holds that mind or consciousness is nothing more than an emergent epiphenomenon of the interactions going on in the brain. What we perceive as qualia (subjective mental states) are nothing more than various electric and chemical responses in the brain. These processes are still deterministic, yet because of the untold number of variables, chaos theory will hold and our actions will be unpredictable (though still determined). Again, this would mean that we have no free will, only the appearance or illusion of it.
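The "deterministic yet unpredictable" point above can be illustrated with the logistic map, a standard textbook example of deterministic chaos (my example, not the poster's): the rule is fixed and fully determined, yet nearby starting values diverge so quickly that long-run prediction is hopeless in practice.

```python
# A sketch of "deterministic yet unpredictable": the logistic map
# x_{n+1} = r * x_n * (1 - x_n) with r = 4 is fully determined by its
# starting value, yet nearby starting values diverge exponentially.
def logistic_orbit(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_orbit(0.2)
b = logistic_orbit(0.2 + 1e-10)  # perturbed in the tenth decimal place

# Same rule, near-identical inputs: the orbits agree early on but are
# effectively uncorrelated after a few dozen iterations.
print(abs(a[1] - b[1]))   # still tiny
print(abs(a[-1] - b[-1])) # typically of order one
```

With only a handful of variables the behaviour is already unpredictable in practice; the brain's "untold number of variables" only makes that worse, which is the poster's point.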
Quote:
One way of approaching this is to ask "at what point does a baby begin to think?".
Piaget and others have done some interesting work in child developmental psychology, but unfortunately, babies can't talk to us :) I've often imagined what it must have been like for that first proto-Homo sapiens to have exhibited his sapience... his self-awareness and ability to understand. Was it something gradual, or was it like a light bulb going off? Think about our own childhood... we don't have this wham sensation where we realize that we are independent. In fact, in most child studies, babies up to the first few months of age don't have a sense of self yet. This is why a baby cries when it is taken away from its mother.
Quote:
2) will be leapt on by advocates of superposition of quantum states etc. The result of an observation appears to be essentially random (as do the digits of pi, although we know they are anything but), but one could believe a particle settles to a given state because it "wants" to, and that minds result from the constituent parts forming a consensus.
While I've been studying Buddhist and Hindu thought, what has leapt out at me is the notion that there is no self, and therefore there is no free will. Or rather, there is choice; however, that choice doesn't belong to an "I". Rather, there is only one thing, consciousness (God, Brahman, void, or what have you), that has free will. Quantum mechanics breaks down at the interpretation level into various camps. We don't really know what role, if any, consciousness plays in decision making, nor for that matter do we know what role the unconscious plays on quantum objects.
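As a side note on the quoted parenthetical about the digits of pi: a short sketch using Gibbons' unbounded spigot algorithm (my choice of illustration, not something from the thread) shows how a fixed, fully deterministic rule produces digits that nonetheless look statistically random.

```python
def pi_digits(n):
    """First n decimal digits of pi via Gibbons' unbounded spigot.

    Python's arbitrary-precision integers make the algorithm exact:
    every digit follows from a fixed arithmetic rule, nothing random.
    """
    digits = []
    q, r, t, k, m, x = 1, 0, 1, 1, 3, 3
    while len(digits) < n:
        if 4 * q + r - t < m * t:
            # The next digit is pinned down; emit it and rescale.
            digits.append(m)
            q, r, m = 10 * q, 10 * (r - m * t), (10 * (3 * q + r)) // t - 10 * m
        else:
            # Consume another term of the underlying series for pi.
            q, r, t, k, m, x = (q * k, (2 * q + r) * x, t * x, k + 1,
                                (q * (7 * k + 2) + r * x) // (t * x), x + 2)
    return digits

print(pi_digits(10))  # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
```

The output passes the usual statistical tests for randomness, yet the generator contains no randomness at all, which is exactly the distinction the quoted post draws about quantum observations.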
Quote:
3) is a very similar line of thought to 2), except it applies to the intangible organisation of matter, even though all the parts may be "dead". Here, Platonism plays a big role.
This idea rests on the idea of reductionism: that all things are separate, or at the very least can be taken apart or can be independent of one another. These separate things can then be put together in a fashion that provides new functionality. This functionality can then become (as is argued) free will, consciousness and intelligence. However, there are alternative viewpoints to this.
I for one am a monistic idealist. Instead of perceiving that the mind stems from the brain, idealists see it as the other way around: from the mind comes matter. In other words, consciousness is the basic "stuff" of reality, and matter is a secondary creation of consciousness. That explains the idealist part; the monistic part means that there is only one thing. The idea that there are separate objects is something of an illusion. This is not the same as saying everything is causally linked and therefore everything is in effect affected by everything else. Rather, there is only one thing... again, consciousness, God, Brahman or whatever you want to call it.
If you can, think of Plato's cave allegory. Reality wasn't the shadows dancing on the cave walls. Nor was it the objects that cast the shadows. Reality was the light itself, and the shadows were merely manifestations cast by concepts within the light. Similar ideas are expressed in Buddhism's "form is void, void is form" dharma.
Quote:
3) seems to be the stance of followers of strong AI, but it always seemed suspicious to me. They say that if you could build a mechanical replica of the functions of the human brain, it too would think, and also that this replica could be written down and would acquire thought by virtue of being read, presumably by anyone, although a Turing machine is typical. This seems silly to me: it suggests the encoding is essentially irrelevant (e.g. a variety of Turing-complete machines with totally different instruction sets). By this argument one could devise a bijection between any two systems of the same complexity, e.g. neurons in the brain with a subset of stars in the sky. Thus, everyone's mind exists everywhere. And if the computer program was stored, but not executed, could the way the wind blows be considered an encoding of some mind, which was reading the program? Reality subject to point-of-view I can cope with; reality subject to definition seems silly.
Why does that seem silly? Another way to look at this is what Protagoras said, "Man is the measure of all things". In effect, our measurement, our definition of things is what makes them. Solipsism often seems to be something of a pariah or a ridiculed superstitious belief, and yet one can't prove it wrong. Our scientific method has enchanted us with the idea of reductionism, causality, and objectivity. The idea of pure subjectivity seems arrogant (I'm God here), incorrect (why can't I just wish myself rich?), and against our common everyday experiences (why can't I read your mind if you're just really a part of my mind?). And yet, this is a selfish view of Solipsism which says that you are the center of subjectivity. But what if "you" aren't?
I've analogized what might be a possibility for mind as something like this:
Mind, consciousness and free will (all one thing) is a great big mainframe. All we sentient and sapient beings are terminals connecting to the mainframe. All the processing and all the decision-making is done by the mainframe. Only the mainframe has free will. We're really just dumb terminals that have a small amount of data access. What's worse, we don't realize we're dumb terminals, and we think we're doing all the processing and choosing ourselves.
The analogy isn't perfect, but it gets across the main idea.
Quote:
2) is philosophically appealing to me. It allows one a great degree of freedom in speculation and belief. It may be that consciousness and free will are liquid-like, flowing between systems, favouring complex ones because that is where the essence of mind is most needed, and presumably being a part of a more important system is somehow nicer. In this case we don't permanently own our minds. It also doesn't exclude ESP etc. It also means that even the simplest systems, like light switches, possess a trivial amount of intelligence, only capable of manifesting itself in negligible ways. I find it quite amusing that people are trying to make computers think, when they may already be thinking: "Help! I'm stuck in a computer!". The humane thing to do would be to smash the microchip and release the free will to spread into other things. Maybe turning it off would be just as good. Maybe simple things such as microchips are home to lazy minds. This is only one interpretation, of course.
I agree. Takuan, a famous Zen philosopher, once advised Yagyu Munenori (whom some consider to have been a better swordsman than Miyamoto Musashi, since the latter turned down an invitation to duel by the former's father), "the mind should be nowhere in particular". And not only should it not be, perhaps it is not anywhere in particular. We attach our minds to our physical bodies, but are they really attached? Who knows, but I think that in many ways we've let the material objectivist ontology of the scientific methodology reign for far too long. I really think that at least one semester of philosophy should be mandatory for all science majors in order to expose them to different metaphysical viewpoints that could underpin reality.
"The world has achieved brilliance without wisdom, power without conscience. Ours is a world of nuclear giants and ethical infants. We know more about war than we know about peace, more about killing than we know about living. We have grasped the mystery of the atom and rejected the Sermon on the Mount." - General Omar Bradley
If the individual particles of our universe have free will, we have no method of perceiving this fact, or of conversing with them on the matter. Thus, the decisions they may make might as well be random. This brings about the convergence of your second and third possibilities, as there is no perceivable difference to the human mind, which is all we have to study the universe.
As to the statements about reality being subject to definition, this is a major flaw in your line of thinking. Reality is a product of our imaginations, no matter what anyone thinks. We interact not with external sensations directly, but with internal representations that may be influenced heavily by external sensation; thus our definitions of the components of reality are of equal or greater importance than the driving forces behind those components.
When we try to understand things like what it is to think, to feel, and to be self-aware, it is natural to debate the definitions of the terms used in the discussion. This is for two reasons: the ambiguity of human languages, and the classification of reality's components according to our own perceptions, which differ from individual to individual (and, species to species?). Thus, without a proper and solid definition of the description we have of our universe, we cannot properly analyze it. Pertaining to intelligence, this means that what it means to think and feel is dependent largely, if not solely, on our own definitions and understanding of what it means to think and feel and to be self-aware.
Thus, as with any problem, we must break our questions into their primal pieces and recombine them in a simpler manner. Are we asking what it would take to make a machine think, or are we asking what it would take to make a machine perform the same functions we perceive as thinking in a human being, or even in a lesser organism with a sufficiently complex neural system? There is much debate between the intelligence of artificial intelligence and the imitation of intelligence by artificial intelligence. However, if we consider that intelligence is not in the implementation (being neurons in our case and software in a computer's), but in the resulting relationship between the being and the being's environment (humans and the universe, AI and data, robots and the universe), we find a very interesting conclusion: imitation is no less than being truly intelligent *.
* Here, I use "truly intelligent" to mean the kind of intelligence of a human being, or other living creature, as opposed to that of an artificial intelligence, in those cases where one thinks it important to make any distinction between the two.
Of course, if your Possibility #1 is true, it may be safe to assume the architecture of the universe and this Will element to be such that the Will affects matter only in situations and manners where it could not be perceived by any intelligence derived from a collection of Will-driven particles. The effects upon the matter would be sparse and subtle, but numerous, so as to combine on a larger scale to generate the forces desired. If this is all true, no debate will ever determine the true nature of the universe, although any speculation we make considering Will not to exist as an element may as well be true, as there is still a reality both outside and inside, and it is only the second we truly live in.
Regardless of the reasons behind our consciousness, imitation indistinguishable from the genuine article is as real as any. Imitation may have been the key design of our own race by the Unknown Creator(s), and so who are we to pass judgement upon those we may likewise create? This, of course, brings into play questions on the nature of gods and their own intelligence and how it relates to ours.
In the end, it's nothing but another discussion of grammar and syntax.
(http://www.ironfroggy.com/)(http://www.ironfroggy.com/pinch)
Quote:
Original post by Timkin
Here's a little thought experiment to consider: Suppose that one day we learn how to build a nano-device that can replicate the input-output functionality of any sort of cell in the human body. Furthermore, assume that the device can devour a cell in your body and replicate its behaviour. So, now I could inject some of these devices into your body and they would replace some of your organic cells, but because they perfectly mimic the input-output functionality of the cell they replaced, you would be functionally no different than if you were made entirely of organic cells.
Now, consider that I keep replacing your organic cells with nano-cells. If I were to continue until all of your organic cells had been replaced, you would be made entirely of nano-cells and would not be the same physical matter you were before I started. Are you still you? Functionally, you would be exactly the same. Would you be 'intelligent'? There's no reason to believe you wouldn't be. If you had a soul, or some other ethereal form of free will, would it still be connected to your new body? If so, then it can only be connected to your body through functional relationships of the matter making up your body. In which case, Occam's Razor would suggest that your 'soul' and your 'free will' are only perceptions of the functionality of your body, rather than things connected to it. For all intents and purposes you would be you, just with a body made of slightly different matter.
Cheers,
Timkin
Your (your?) experiment is one of the best things I've read lately. It reminds me of William Gibson's "Flat-Line construct", which was the "image" of the mind of a long-dead hacker...
I've recently read "The Light of Other Days" (Arthur C. Clarke), which also touches on the subject.
And all of this brings me back to the following lines from Blade Runner:
"I've seen things you people wouldn't believe.
Attack ships on fire off the shoulder of Orion.
I watched C-beams glitter in the dark near the Tannhauser gate.
All those moments will be lost in time, like tears in rain.
Time to die."
[size="2"]I like the Walrus best.
Quote:
Original post by Timkin
Personally, I see no reason to posit souls or other intangible, ethereal constructs to explain 'free will'. Indeed, I don't think we even need to have free will to be intelligent, rational beings. 'Rational' and 'Irrational' are subjective concepts based on values. There is nothing to suggest that we ever make a decision contrary to our values or desires; simply that we might not always be aware of what those values or desires are or the extent to which they relate to the decision context.
I would say that intelligence is only a meaningful idea in the context of making choices. You may say which choice we make depends on our desires, and that this can then be deterministic; I would say being able to prioritise is the essence of free will. You may argue a machine can prioritise selfishly, but what if my desire is to act against my desires? That would be a paradox in a deterministic world, but it's no obstacle to free will. Unfortunately it is undemonstrable: I could cut my own hand off (which I don't want to do), but doing so would prove nothing beyond my desire to win an argument being greater than my desire for self-preservation.
Your gradual reconstruction experiment is a great demonstration of the liquidity of consciousness, although the construction of suitable artificial replacements for cells would have to accommodate nondeterminism (in my line of thinking). And yes, if this were possible, building the result from scratch, without converting a person, would be the creation of a mind. Does this mean strong A.I. is possible? Not unless a nondeterministic computer can be built whose possible actions can either be harnessed by a mind, or give rise to a mind.
Quote:
Quote:
Reality subject to point-of-view I can cope with, reality subject to definition seems silly.
Why does that seem silly? Another way to look at this is what Protagoras said, "Man is the measure of all things". In effect, our measurement, our definition of things is what makes them.
Apart from being a chicken-and-egg problem, this seems silly because if we alter our definitions, the world doesn't change, either objectively or in how it appears to us. Our understanding of the world may change, but show me evidence that our sensations alter. "Man is the measure of all things", as I understand it, is simply a wry truism: there is no measurement without man! Can there be man without measurement?
As an interesting side-note, I read somewhere that white blood cells can now be considered a part of the nervous system. Where does it end?
I wonder, as babies grow up, do they start out as a total blank slate where thought is concerned? Not only do they move their limbs randomly until they learn the neural patterns which give useful movement, but they seem genuinely surprised that actions are repeatable. My baby cousin recently spent a happy half hour putting sand down a crack in the floor. Do we learn modus ponens, or is it a priori?
This topic is closed to new replies.