Formalizing Thought
I'm curious: has anyone ever attempted a formal, mathematical definition of thought (as would be done in computer science), in the hope of programming a thinking system? Perhaps not someone in mathematics or computer science, but someone in psychology or cognitive science. If someone were to attempt to program such a thinking system, some kind of formal structure would need to be created, perhaps something including goals, knowledge, and thoughts.
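Just to make that concrete, here's a minimal sketch (Python, every name hypothetical) of what such a structure might look like: knowledge as a set of facts, goals as facts you want to hold, and a "thought" as one application of an inference rule.
Code:
# A toy formalization (all names hypothetical): knowledge is a set of
# facts, goals are facts we want to hold, and a "thought" is one
# application of an inference rule.
from dataclasses import dataclass, field

@dataclass
class Agent:
    knowledge: set = field(default_factory=set)   # facts taken as true
    goals: set = field(default_factory=set)       # facts we want to be true
    rules: list = field(default_factory=list)     # (premises, conclusion) pairs

    def think(self):
        """One 'thought': fire any rule whose premises are all known."""
        for premises, conclusion in self.rules:
            if premises <= self.knowledge and conclusion not in self.knowledge:
                self.knowledge.add(conclusion)
                return conclusion
        return None  # nothing new follows

    def satisfied(self):
        return self.goals <= self.knowledge

agent = Agent(
    knowledge={"socrates is a man"},
    goals={"socrates is mortal"},
    rules=[({"socrates is a man"}, "socrates is mortal")],
)
while not agent.satisfied():
    print("thought:", agent.think())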
There have been many attempts to formalize logic, but Gödel's Incompleteness Theorem demonstrates that there are very real bounds on such things (all of which applies to 'thought' if you include the relevant mathematics as something that could be 'thought about').
"Walk not the trodden path, for it has borne it's burden." -John, Flying Monk
Hubert L. Dreyfus Interview: Artificial Intelligence
Quote:
...
The people in the AI lab, with their "mental representations," had taken over Descartes and Hume and Kant, who said concepts were rules, and so forth. And far from teaching us how it should be done, they had taken over what we had just recently learned in philosophy, which was the wrong way to do it. The irony is that 1957, when AI, artificial intelligence, was named by John McCarthy, was the very year that Wittgenstein's Philosophical Investigations came out against mental representations, and Heidegger already in 1927 -- that's Being and Time -- wrote a whole book against mental representations. So, they had inherited a lemon. They had taken over a loser philosophy. If they had known philosophy, they could've predicted, like me, that it was a research program. They took Cartesian modern philosophy and turned it into a research program, and anybody who knew enough philosophy could've predicted it was going to fail. But nobody else paid any attention. That's why I got this prize. I saw what they did and I predicted it, and that's the end of them.
...
"I thought what I'd do was, I'd pretend I was one of those deaf-mutes." - the Laughing Man
Quote:
Original post by Extrarius
There have been many attempts to formalize logic, but Gödel's Incompleteness Theorem demonstrates that there are very real bounds on such things (all of which applies to 'thought' if you include the relevant mathematics as something that could be 'thought about').
Logic is formalized. Gödel's theorem is about consistency and completeness, and it is strictly tied to axiomatic formal systems: consistent, effectively axiomatized systems strong enough to express arithmetic. But even then, thinking of ZFC, the bounds are almost invisible in everyday mathematics.
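To see what "logic is formalized" means in practice, here is a tiny machine-checked proof, a sketch in Lean 4 syntax, of modus ponens:
Code:
-- Modus ponens, fully formalized: from a proof of p and a proof of
-- p → q, produce a proof of q. A proof checker verifies every step.
theorem modus_ponens (p q : Prop) (hp : p) (hpq : p → q) : q :=
  hpq hp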
Quote:
Original post by LessBread
Hubert L. Dreyfus Interview: Artificial Intelligence
Quote:
[...]
Great quote. [grin] But I don't think your link links what you think it links.
Quote:
Original post by Sneftel
Quote:
Original post by LessBread
Hubert L. Dreyfus Interview: Artificial Intelligence
Quote:
[...]
Great quote. [grin] But I don't think your link links what you think it links.
Maybe this? I disagree with a fair amount of what he says.
It does strike me as a bit Searle-esque. But I think it captures well the arrogance of early AI researchers who completely ignored the implications of philosophy on AI (as opposed to the implications of AI on philosophy).
Quote:
Original post by Sneftel
It does strike me as a bit Searle-esque. But I think it captures well the arrogance of early AI researchers who completely ignored the implications of philosophy on AI (as opposed to the implications of AI on philosophy).
Yeah, but he persists in his attacks on AI researchers (to this day), holds many far-flung views, and thinks he can settle the matter by philosophy alone. See this article by John McCarthy on the topic. I don't agree with all of what he says either. They both go too far in their claims.
Yes, I got the link wrong and yes that was the link I intended. The John McCarthy link is interesting, but it doesn't surprise me that he would seek to refute Dreyfus - considering that their debate goes back to 1957. Did you read the entire interview?
Quote:
...
There were people called "extropians" at MIT who said, "We'll become immortal because everything that's important about us is what we can transmit on the Internet and everything else is just that mortal part of us, we can leave it behind." So, there was all this anti-body hype, and there's where Merleau-Ponty comes in, because in the quote you read, Merleau-Ponty says that it's our body with its skills which enables us to relate to things by going around them, and relate to people by this interesting thing called intercorporeality, where I don't have to figure out from your gestures and how you look what you're thinking and what you're doing, I respond immediately with my gestures and my look. In Merleau-Ponty that intercorporeality seemed to him magical. Whenever he couldn't explain anything that was just his word for it.
Now they've discovered something called mirror neurons, where it turns out that the same neurons in apes that perceive a certain movement, if I'm grasping for something [for example], also are the neurons that produce the movement, so that it's no accident that when you see me doing it, you do an appropriate thing. I think of this myself -- they never mention yawning, but yawning would be the clearest case of this. Yawning is intercorporeality. If things are boring and I yawn, you don't have to figure out what it meant, you can't help but yawn. So, Merleau-Ponty has all the ways the body is, as he puts it, geared into the world. And that's what the Internet definitely leaves out.
I just have to put in another comment -- a book I didn't show you because I didn't write it, but I published it, in a sense. There was a fellow graduate student named Samuel Todes who was very influential on me. I didn't mention him when we talked about my graduate [years], but if I went into Continental philosophy it was also largely because he was the only one I could talk to. But he has this idea -- it's very important -- that the body has a structure. In Merleau-Ponty you hear always that the body has this capacity to act, to be open to the world, to go around objects, but Todes says, "Well, we've got a front and a back, an up and a down, we move forward more easily than we move backward, we can't protect ourselves from behind." There's a lot to having a body that Merleau-Ponty doesn't see. So, I published Todes' book, Body and World, because I think it's the next stage that people will have to pay attention to. I talked about it in my presidential address. This says that until computers could have bodies enough like ours (which I don't think they ever will), and feelings like ours, they can't be intelligent.
And now the Internet: if we were disembodied on the Internet, we wouldn't be able to acquire skills, we wouldn't be able to see what was relevant and not relevant, we wouldn't be able to relate to other people. So, the Internet turns out to be a marvelous case of a counter-example or experiment about what you can and can't do without a body.
...
I've read "Being and Time" and "The Phenomenology of Perception" as well as a few books written by the extropians, and to me Dreyfus' criticisms are largely on the mark. However, it seems to me that rather than trying to craft a model of the world and a model of the body, more could be gained by conceptualizing what the world of a computer would be like and seeking to craft an AI from that perspective, and from there attempting to adapt that rudimentary AI into one capable of relating to the real world. Heideggerians use the term "world" to mean something different from the everyday sense of the word. It doesn't mean the earth; it means something more subjective: the totality of things that have meaning for the individual, in emotional as well as physical space (and I really don't do the concept justice with that explanation). From this perspective, what I suggest above is more along the lines of saying that the AI field would gain more ground by beginning with a conception of today's computers as something like bacteria or simple life forms and building up from there. It may not be that computers are disembodied, only that we haven't yet figured out how to conceptualize the kinds of bodies they have and the kinds of worlds those bodies relate to. Perhaps there has been movement in this area and I'm not aware of it, but it seems far removed from the extropian visions that I'm familiar with.
As an interesting aside regarding mirror neurons, this essay by a leading neuroscientist discusses the implications of their discovery: Mirror Neurons and the Brain in the Vat. The framework is clearly Cartesian, yet the slant is towards dissolving mind/body duality.
"I thought what I'd do was, I'd pretend I was one of those deaf-mutes." - the Laughing Man
I don't think that having a Cartesian "internal world representation" is so wrong. I do think that we (humans) don't really go through such an intermediate before acting, but that doesn't mean it's not possible. It could in fact make reasoning easier, no matter which algorithms you choose, because you can make some simplifying assumptions (which may not all be wrong). And if you think about it, some animals, like cats, may in fact actively use such a virtual world representation. If you ever observe a cat before it jumps, it will spend time looking at the target, and sometimes it will even coil its back legs to jump but then settle back down, as if it had predicted that the jump would have failed. It seems cats run simulations of their own jumps in their minds to determine what to do so as not to hit anything.
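To make that rehearsal idea concrete, here's a minimal sketch (toy physics, hypothetical names) of an agent that runs candidate actions through an internal model and only commits to one the model predicts will succeed:
Code:
import random

# Hypothetical toy physics: a jump of a given strength lands at
# roughly that distance; the model predicts success or failure.
def internal_model(strength, target_distance):
    predicted_landing = strength  # crude forward model
    return abs(predicted_landing - target_distance) < 0.1

def choose_jump(target_distance, rehearsals=100):
    """Rehearse candidate jumps internally; only commit to a jump
    the internal model predicts will succeed."""
    for _ in range(rehearsals):
        strength = random.uniform(0.0, 2.0)
        if internal_model(strength, target_distance):
            return strength  # commit to the jump
    return None  # settle back down, like the cat

print(choose_jump(1.3))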
As for the whole debate of "is it possible or not to have a computer that thinks", I would say that I believe it is. If we can do it, and we (and our brains) exist in this physical world, then it's possible to construct some kind of machine that does it. After all, our brain is a computer too. Where I think the early AI researchers went wrong is not in the "internal representation of the world", but in the Cartesian reasoning approach.
I believe that our brain works mostly as an approximator, making choices heuristically, following a few basic principles, but mostly using what is learned and taking educated guesses. We do not go through a lengthy mathematical reasoning process unless we force ourselves to. We simply navigate our thoughts in a fluid and organic manner.
I would think that it is possible to build a "pretty smart" AI using Cartesian reasoning, maybe something along the lines of the computer in Star Trek, but never a self-aware entity that is capable of learning as well as we can. For that, we will need neural networks/machine learning, or some kind of extremely dynamic and organic system that does not base its reasoning entirely on rigid mathematical rules.
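As a toy illustration of that contrast (entirely hypothetical), compare a rigid rule with a learned approximator that just generalizes from remembered examples:
Code:
# Rigid "Cartesian" rule: exact, but brittle outside its assumptions.
def rule_based(temperature):
    return "hot" if temperature > 30.0 else "cold"

# Learned heuristic: answer by analogy to remembered examples
# (a one-line nearest-neighbour approximator).
examples = [(0.0, "cold"), (15.0, "cold"), (25.0, "hot"), (35.0, "hot")]

def learned(temperature):
    return min(examples, key=lambda e: abs(e[0] - temperature))[1]

# The two styles can disagree near the boundary:
print(rule_based(28.0), learned(28.0))  # -> cold hot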