
Formalizing Thought

Started by January 17, 2006 11:36 AM
17 comments, last by JD 18 years, 9 months ago
I'm curious. Has anyone ever attempted to make a formal definition of thought, in a mathematical way (as would be done in computer science), in the hope of programming a thinking system? Perhaps not someone in mathematics or computer science, but someone in psychology, in the area of cognitive science. If someone were to attempt to program such a thinking system, some kind of formal structure would need to be created, perhaps one including goals, knowledge and thoughts.
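For illustration only, here's a minimal sketch of what such a structure might look like: an agent with a knowledge base of facts, a list of goals, and a "think" step that derives new facts. All the names (`Fact`, `Agent`, `think`) are hypothetical; this isn't any real cognitive architecture, just one toy way to make the idea concrete.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Fact:
    """A simple subject-predicate-object assertion."""
    subject: str
    predicate: str
    obj: str

@dataclass
class Agent:
    knowledge: set = field(default_factory=set)   # set of Fact
    goals: list = field(default_factory=list)     # Facts the agent wants to hold

    def think(self):
        """One trivial 'thought': derive new facts by transitivity of 'is_a'."""
        derived = set()
        for a in self.knowledge:
            for b in self.knowledge:
                if a.predicate == b.predicate == "is_a" and a.obj == b.subject:
                    derived.add(Fact(a.subject, "is_a", b.obj))
        new = derived - self.knowledge
        self.knowledge |= new
        return new

    def satisfied_goals(self):
        return [g for g in self.goals if g in self.knowledge]

agent = Agent()
agent.knowledge = {Fact("socrates", "is_a", "human"),
                   Fact("human", "is_a", "mortal")}
agent.goals = [Fact("socrates", "is_a", "mortal")]
agent.think()
print(agent.satisfied_goals())  # the goal is now satisfied
```

Of course, the hard part isn't the data structure; it's deciding what the inference step should be, which is exactly where projects like Cyc (mentioned below) have spent decades.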

Looking for a serious game project?
www.xgameproject.com
Conceptual Representation: take a gander at http://www.generation5.org/content/1999/concept.asp, in particular the output from the bit on IPP, as it gives you a feel for the structure of the data.

The Cyc project also tries to achieve this: http://www.cyc.com/
I think this is just what you're looking for: http://www.singinst.org/GISAI/
There have been many attempts to formalize logic, but Gödel's Incompleteness Theorem demonstrates that there are very real bounds on such things (which applies to 'thought' if you include the relevant math as something that could be 'thought about').
"Walk not the trodden path, for it has borne its burden." -John, Flying Monk
Hubert L. Dreyfus Interview: Artificial Intelligence

Quote:
...
The people in the AI lab, with their "mental representations," had taken over Descartes and Hume and Kant, who said concepts were rules, and so forth. And far from teaching us how it should be done, they had taken over what we had just recently learned in philosophy, which was the wrong way to do it. The irony is that 1957, when AI, artificial intelligence, was named by John McCarthy, was the very year that Wittgenstein's Philosophical Investigations came out against mental representations, and Heidegger already in 1927 -- that's Being and Time -- wrote a whole book against mental representations. So, they had inherited a lemon. They had taken over a loser philosophy. If they had known philosophy, they could've predicted, like me, that it was a research program. They took Cartesian modern philosophy and turned it into a research program, and anybody who knew enough philosophy could've predicted it was going to fail. But nobody else paid any attention. That's why I got this prize. I saw what they did and I predicted it, and that's the end of them.
...

"I thought what I'd do was, I'd pretend I was one of those deaf-mutes." - the Laughing Man
Quote: Original post by Extrarius
There have been many attempts to formalize logic, but Gödel's Incompleteness Theorem demonstrates that there are very real bounds on such things (which applies to 'thought' if you include the relevant math as something that could be 'thought about').


Logic is formalized. Gödel's theorem has more to do with mathematical consistency and completeness, and it applies strictly to axiomatic formal systems. Even then, in a system as strong as ZFC, the bounds are almost invisible in practice.
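To make "logic is formalized" concrete, here's a sketch of propositional logic as a plain data structure, with a brute-force truth-table check for validity. This is standard textbook material; the encoding (nested tuples) and function names are just one arbitrary choice, not any particular library's API.

```python
from itertools import product

# Formulas as nested tuples: ("var", name), ("not", f),
# ("and", f, g), ("or", f, g), ("implies", f, g)

def variables(f):
    """Collect the set of variable names appearing in a formula."""
    if f[0] == "var":
        return {f[1]}
    return set().union(*(variables(sub) for sub in f[1:]))

def evaluate(f, assignment):
    """Evaluate a formula under a {name: bool} assignment."""
    op = f[0]
    if op == "var":
        return assignment[f[1]]
    if op == "not":
        return not evaluate(f[1], assignment)
    if op == "and":
        return evaluate(f[1], assignment) and evaluate(f[2], assignment)
    if op == "or":
        return evaluate(f[1], assignment) or evaluate(f[2], assignment)
    if op == "implies":
        return (not evaluate(f[1], assignment)) or evaluate(f[2], assignment)
    raise ValueError(f"unknown operator: {op}")

def is_tautology(f):
    """True if f holds under every possible truth assignment."""
    vs = sorted(variables(f))
    return all(evaluate(f, dict(zip(vs, vals)))
               for vals in product([False, True], repeat=len(vs)))

# Modus ponens as a formula: ((p -> q) and p) -> q
p, q = ("var", "p"), ("var", "q")
mp = ("implies", ("and", ("implies", p, q), p), q)
print(is_tautology(mp))  # True
```

Note the connection to the Gödel discussion: this truth-table procedure shows propositional logic is decidable; incompleteness only bites once a system is strong enough to encode arithmetic.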
Quote: Original post by LessBread
Hubert L. Dreyfus Interview: Artificial Intelligence

...

Great quote. [grin] But I don't think your link links what you think it links.
Quote: Original post by Sneftel
Quote: Original post by LessBread
Hubert L. Dreyfus Interview: Artificial Intelligence

...

Great quote. [grin] But I don't think your link links what you think it links.


Maybe this? I disagree with a fair amount of what he says.
It does strike me as a bit Searle-esque. But I think it captures well the arrogance of early AI researchers who completely ignored the implications of philosophy on AI (as opposed to the implications of AI on philosophy).
Quote: Original post by Sneftel
It does strike me as a bit Searle-esque. But I think it captures well the arrogance of early AI researchers who completely ignored the implications of philosophy on AI (as opposed to the implications of AI on philosophy).


Yeah, but he persists in his attacks on AI researchers (to this day), holds many far-flung views, and thinks he can draw conclusions from philosophy alone. See this article by John McCarthy on the topic. I don't agree with all of what he says either. They both go too far in their claims.

This topic is closed to new replies.
