
Formalizing Thought

Started by January 17, 2006 11:36 AM
17 comments, last by JD 18 years, 9 months ago
Yes, I got the link wrong and yes, that was the link I intended. The John McCarthy link is interesting, but it doesn't surprise me that he would seek to refute Dreyfus - considering that their debate goes back to 1957. Did you read the entire interview?

Quote:
...
There were people called "extropians" at MIT who said, "We'll become immortal because everything that's important about us is what we can transmit on the Internet and everything else is just that mortal part of us, we can leave it behind." So, there was all this anti-body hype, and there's where Merleau-Ponty comes in, because in the quote you read, Merleau-Ponty says that it's our body with its skills which enables us to relate to things by going around them, and relate to people by this interesting thing called intercorporeality, where I don't have to figure out from your gestures and how you look what you're thinking and what you're doing, I respond immediately with my gestures and my look. In Merleau-Ponty that intercorporeality seemed to him magical. Whenever he couldn't explain anything that was just his word for it.

Now they've discovered something called mirror neurons, where it turns out that the same neurons in apes that perceive a certain movement, if I'm grasping for something [for example], also are the neurons that produce the movement, so that it's no accident that when you see me doing it, you do an appropriate thing. I think of this myself -- they never mention yawning, but yawning would be the clearest case of this. Yawning is intercorporeality. If things are boring and I yawn, you don't have to figure out what it meant, you can't help but yawn. So, Merleau-Ponty has all the ways the body is, as he puts it, geared into the world. And that's what the Internet definitely leaves out.

I just have to put in another comment -- a book I didn't show you because I didn't write it, but I published it, in a sense. There was a fellow graduate student named Samuel Todes who was very influential on me. I didn't mention him when we talked about my graduate [years], but if I went into Continental philosophy it was also largely because he was the only one I could talk to. But he has this idea -- it's very important -- that the body has a structure. In Merleau-Ponty you hear always that the body has this capacity to act, to be open to the world, to go around objects, but Todes says, "Well, we've got a front and a back, an up and a down, we move forward more easily than we move backward, we can't protect ourselves from behind." There's a lot to having a body that Merleau-Ponty doesn't see. So, I published Todes' book, Body and World, because I think it's the next stage that people will have to pay attention to. I talked about it in my presidential address. This says that until computers have (which I don't think they ever will) bodies enough like ours, and feelings like ours, they can't be intelligent.

And now the Internet: if we were disembodied on the Internet, we wouldn't be able to acquire skills, we wouldn't be able to see what was relevant and not relevant, we wouldn't be able to relate to other people. So, the Internet turns out to be a marvelous case of a counter-example or experiment about what you can and can't do without a body.
...


I've read "Being and Time" and "The Phenomenology of Perception" as well as a few books written by the extropians and to me Dreyfuss' criticisms are largely on the mark. However, it seems to me that rather than trying to craft a model of the world and a model of the body, more could be gained by conceptualizing what the world of a computer would be like and seek to craft an AI from that perspective. And from there attempt to move forward adapting that rudimentary AI into one capable of relating to the real world. Heideggereans use the term "world" to mean something different from the everyday sense of the word. It doesn't mean the earth, it means something more subjective - the totality of things that have meaning for the individual - in emotional as well as physical space (and I really don't do the concept justice with that explanation). From this perspective what I suggest above is more along the lines of saying that the AI field would make more ground beginning with a conception of todays computers as something like bacteria or simple life forms and building up from there. It may not be that computers are disembodied, only that we haven't yet figured out how to conceptualize the kinds of bodies they have and the kinds of worlds those bodies relate to. Perhaps there has been movement in this area and I'm not aware of it, but it seems far removed from the extropian visions that I'm familiar with.

As an interesting aside regarding mirror neurons, this essay by a leading neuroscientist discusses the implications of their discovery: Mirror Neurons and the Brain in the Vat. The framework is clearly Cartesian, yet the slant is towards dissolving mind/body duality.
"I thought what I'd do was, I'd pretend I was one of those deaf-mutes." - the Laughing Man
I don't think that having a Cartesian "internal world representation" is so wrong. I do think that we (humans) don't really go through such an intermediate before acting, but that doesn't mean it's not possible. It could in fact make reasoning easier, no matter the algorithms you choose, because you can make some simplifying assumptions (which may not all be wrong). And if you think about it, some animals, like cats, may in fact actively use such a virtual world representation. If you ever observe a cat before it jumps, it will spend time looking at its target, and sometimes it will even move its back legs as if to jump but then settle back down, as if it had predicted that the jump would fail. It seems cats run simulations of their own jumps in their minds to determine what to do so as not to hit any object.
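To make that concrete, here is a minimal sketch of the idea (the physics model and every number in it are invented for illustration): an agent rehearses candidate jumps against a crude internal ballistic model and only commits when one is predicted to land on target; otherwise it "settles back down".

import math

GRAVITY = 9.8  # m/s^2; all values here are made up for the sketch

def predicted_landing(speed, angle_rad):
    # Internal model: ballistic range of a jump at the given launch speed/angle.
    return (speed ** 2) * math.sin(2 * angle_rad) / GRAVITY

def plan_jump(target_distance, tolerance=0.05):
    # Mentally rehearse candidate jumps; commit only if one is predicted to work.
    for speed in (1.0, 2.0, 3.0, 4.0):
        for angle_deg in range(20, 70, 5):
            landing = predicted_landing(speed, math.radians(angle_deg))
            if abs(landing - target_distance) <= tolerance:
                return speed, angle_deg  # commit to this jump
    return None  # no simulated jump succeeded: settle back down

print(plan_jump(0.8))  # -> (3.0, 30), a jump the model predicts will land close enough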

As for the whole debate of "is it possible or not to have a computer that thinks", I would say that I believe it is. If we can do it, and we (and our brain) exist in this physical world, then it's possible to construct some kind of machine that does it. After all, our brain is a computer too. Where I think they are wrong is not in the "internal representation of the world", but in the Cartesian reasoning approach.

I believe that our brain works mostly as an approximator, making choices heuristically, following a few basic principles, but mostly using what is learned, and taking educated guesses. We do not go through a lengthy mathematical reasoning process, not unless we force ourselves to. We simply navigate our thoughts in a fluid and organic manner.

I would think that it is possible to build a "pretty smart" AI using Cartesian reasoning means. Maybe something along the lines of the computer in Star Trek, but never a self-aware entity that is capable of learning as well as we can. For that, we will need neural networks/machine learning, or some kind of extremely dynamic and organic system that does not base its reasoning entirely on rigid mathematical rules.

Looking for a serious game project?
www.xgameproject.com
While I agree with Max_Payne's last statement, playing the devil's advocate: who is to say that there isn't some Cartesian reasoning system going on subconsciously? I don't know about you, but I have no clue what my subconscious is doing, and it's responsible for a vast majority of human activity, whether we realise it or not.
Quote: Original post by Downer
While I agree with Max_Payne's last statement, playing the devil's advocate: who is to say that there isn't some Cartesian reasoning system going on subconsciously? I don't know about you, but I have no clue what my subconscious is doing, and it's responsible for a vast majority of human activity, whether we realise it or not.


Well, it's mostly a matter of the nature of neural networks. They aren't really good at hard mathematical computations relying on a large set of rules and a dataset. Neural networks are good at estimating things and "learning". By structuring them in a particular way, you can make them follow simple processes, but that's about it. You could hardly build something like a programmable computer out of neural networks.
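For what it's worth, here is a toy illustration of that point (the weights are hand-picked rather than learned): a single threshold neuron can be wired to behave like a simple logic gate, which is about the level of "simple process" you can structure into them directly.

def neuron(inputs, weights, bias):
    # Single threshold unit: fires (1) if the weighted sum exceeds zero.
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0

# Hand-picked weights make the same unit behave like an AND or an OR gate.
AND = lambda a, b: neuron((a, b), (1, 1), -1.5)
OR = lambda a, b: neuron((a, b), (1, 1), -0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))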

Usually, evolution goes towards the simplest solution. This most likely means interacting directly with the world rather than through some proxy "internal representation" of the world. As for the "subconscious", it's a wild hypothesis, but you can't really prove it even exists. I believe that most of the time, we *are* conscious of all our thoughts. It's mostly a matter of whether you are honest with yourself or not. Unless by "the vast majority of human activity", you mean breathing, digesting, etc... In which case this is mostly an automated process relying on some semi-independent parts of the brain... And that's actually a good thing, because I wouldn't want to forget to breathe or forget to digest my food.

Looking for a serious game project?
www.xgameproject.com
Quote: Original post by Max_Payne
You could hardly build something like a programmable computer out of neural networks.
Then you'll find this interesting reading [smile]
Quote: Original post by lucky_monkey
Quote: Original post by Max_Payne
You could hardly build something like a programmable computer out of neural networks.
Then you'll find this interesting reading [smile]


Kind of going in the direction I was. It would be very difficult for neural networks to naturally evolve in the direction of programmable computer logic, or to engineer them to be used in that way.

Plus, correct me if I'm wrong, but doesn't this, in a way, show that neural networks are more flexible than Turing machines, or at least as flexible? In which case it would be rather pointless to emulate Turing machines with neural networks, since it just adds overhead.
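As a rough aside on why the emulation is possible in principle even though it's pointless in practice: the same toy threshold unit from earlier can be wired as a NAND gate, and NAND is universal, so any boolean circuit (hence, with enough plumbing, a programmable computer) could be composed from such units. The sketch below builds XOR purely out of NAND neurons; the absurd unit count is exactly the overhead being talked about.

def neuron(inputs, weights, bias):
    # Same threshold unit as before.
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0

NAND = lambda a, b: neuron((a, b), (-1, -1), 1.5)

def XOR(a, b):
    # XOR composed purely from NAND units, the classic universality demo.
    n1 = NAND(a, b)
    return NAND(NAND(a, n1), NAND(b, n1))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", XOR(a, b))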

Looking for a serious game project?
www.xgameproject.com
The problem with thinking machines is that nobody will recognize one when they see one.
Free Mac Mini (I know, I'm a tool)
All this theory vs. practice talk reminds me of the grand Decompiler Debates, in which the critics like to bring up theoretical results like the Halting Problem.

It's funny to me... I've written a (mostly) functional IA32 decompiler that does everything important except for type constraints and memory-related SSA, which properly tears a 400 KB PE into about 140,000 lines of marked-up pseudocode that looks like a cross between assembly and C.


The moral of this story is that even if there's a theory that says that doing something perfectly in all cases is impossible, you shouldn't give up on making something that does a "good job" in "most cases".
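That "good job in most cases" approach can be sketched in miniature (this is a toy instruction set, nothing like real IA32): a recursive-descent sweep that follows control flow, emits pseudocode for whatever it can reach, and simply never touches the junk it can't prove is code.

# Toy instruction set: address -> (mnemonic, operand, length). Not real IA32.
CODE = {
    0: ("mov", 5, 2),
    2: ("jmp", 6, 2),
    4: ("bad", None, 1),   # junk byte the sweep should never reach
    6: ("call", 10, 2),
    8: ("ret", None, 1),
    10: ("ret", None, 1),
}

def decompile(entry):
    # Recursive-descent sweep: follow control flow, emit pseudocode for reachable code.
    seen, worklist, lines = set(), [entry], []
    while worklist:
        addr = worklist.pop()
        while addr in CODE and addr not in seen:
            seen.add(addr)
            op, arg, length = CODE[addr]
            lines.append(f"{addr:04x}: {op} {'' if arg is None else arg}".rstrip())
            if op == "jmp":
                addr = arg              # unconditional: follow the jump
            elif op == "call":
                worklist.append(arg)    # queue the callee, then fall through
                addr += length
            elif op == "ret":
                break                   # end of this path
            else:
                addr += length
    return sorted(lines)

print("\n".join(decompile(0)))  # address 4 never appears: unreachable junk is skipped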

From a software design point of view rather than a philosophical point of view, I suspect AI researchers never get very far because they get locked into self-debates while designing something. Either that, or they try to create a solution without a problem (which leads to "what if" deadlock in the design stage).

If someone just sat down and said "I want to do a comparison between this input sentence and some database of trigger sentences and perform actions if a match within an error margin of E is found", they might get a lot further towards an adequate solution.

If they instead say "I want to make an AI that understands English", they've defined their problem in an ambiguous way, and they won't be able to focus on solving any useful problems.
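The well-posed version of that problem is small enough to sketch (the trigger sentences and action names here are hypothetical, and a string-similarity ratio stands in for the error margin E):

import difflib

# Hypothetical trigger database: pattern sentence -> action name.
TRIGGERS = {
    "open the door": "action_open_door",
    "pick up the sword": "action_take_sword",
    "talk to the guard": "action_talk_guard",
}

def match_action(sentence, error_margin=0.25):
    # Return the action whose trigger best matches the input, if the match
    # is within the allowed error margin E; otherwise return None.
    best_action, best_score = None, 0.0
    for trigger, action in TRIGGERS.items():
        score = difflib.SequenceMatcher(None, sentence.lower(), trigger).ratio()
        if score > best_score:
            best_action, best_score = action, score
    return best_action if best_score >= 1.0 - error_margin else None

print(match_action("please open the door"))  # close enough -> action_open_door
print(match_action("sing me a song"))        # nothing within margin -> None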
My theory on thought is that we link things together to form thought. For example, when you open your eyes a flood of information is recorded onto a frame. The brain then deciphers objects in that frame and categorizes them into groups, etc., and also creates links or associations for easier retrieval. Then the brain substitutes objects during thought. Say today you killed a raven with your bow and arrow. Tomorrow you see a different bird flying around. You quickly recall the raven and put it in the new bird's place, and thus you conclude that this new bird could also be killed with an arrow just the same as the raven was. This could go for tools as well. Substitute a wooden handle for a metal one, for example. How many times have you solved a problem through substitution? I have a lot. Need a cap for the car battery and don't have one? Make one out of cork, after remembering putting a cork into a champagne bottle last year that stopped the flood of liquid from the bottle.

The more information you have, the easier it is to solve a problem. So we need memory to record this information, sort it out, and store it efficiently so we can recall it easily. There is also faith, or the logical jumps that we make, and that's part of the trial and error method that's valuable. We don't think much in that case; we just make the decision and go with it even if it's in error. That's part of life. A lot of it is also learned from others. Their knowledge gets assimilated into yours and then you combine that with what you already know to do greater things. So memory recall is very important for our survival. You see a red colored snake, touch it, it bites you and you nearly die, and so you remember not to touch red snakes again because it's painful. Pain is then stored as some frame in your memory, perhaps a big boo-boo on your hand and the snake together. Or the image of the red snake is associated with your hand. Then when you run across a yellow snake, you sub in the red snake and realize that it might lead to another painful encounter, so you don't touch the yellow snake. Then some tribesman shows you that you can eat the brown snakes, and you associate the brown snake with food but keep the yellow and red snakes out of your diet. So color then gets associated with snakes and food. As you can see it can get quite complex.
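A crude sketch of that substitution idea (the remembered things, features, and lessons are all made up): knowledge attached to one remembered thing gets reused for anything that shares enough features with it, exactly like the yellow snake borrowing the red snake's lesson.

# Made-up memory: each remembered thing is a set of features plus a lesson learned.
MEMORY = [
    ({"snake", "red", "brightly colored"}, "painful, don't touch"),
    ({"snake", "brown"}, "edible"),
    ({"bird", "black", "flies"}, "can be shot with an arrow"),
]

def similarity(a, b):
    # Jaccard overlap between two feature sets.
    return len(a & b) / len(a | b)

def recall(new_thing, threshold=0.3):
    # Substitute the most similar remembered thing and reuse its lesson.
    best = max(MEMORY, key=lambda m: similarity(m[0], new_thing))
    return best[1] if similarity(best[0], new_thing) >= threshold else "no idea"

print(recall({"snake", "yellow", "brightly colored"}))  # subs in the red snake -> "painful, don't touch"
print(recall({"bird", "small", "flies"}))               # subs in the raven -> "can be shot with an arrow"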

This topic is closed to new replies.
