Reasoning Engines?
I personally believe that the best way to create an intelligent (or even sentient) computer would be to use neural networks. However, I read some time ago that a lot of people in AI research don't agree. Some people seem to think it should be possible to implement a thinking machine using AI algorithms instead.
I would be interested in knowing more about this perspective. Are there theories or papers out there about "thinking" or "reasoning engines", about programs that try to act intelligently? Is there much research into imitating the thinking process of the human brain?
The only remotely intelligent AI I've found so far that you can communicate with is the Alice bot. But as you chat with her, it quickly becomes obvious that she's not really intelligent, probably because she's not really learning. She can only chat using bits of phrases and prewritten answers stored in a database, or something along those lines.
"Reasoning engines", at least in their current form, have basically nothing to do with ANNs. There's this popular misconception that ANNs have great unrealized potential for use in the next generation of AI simply because they simulate the actual operation of the human brain. The thing is, they don't. They have a couple of commonalities with neural tissue, but in operation and capabilities they are so dissimilar that they're really used for completely different things. ANNs should be thought of as just another black-box function learning system.
If you want to learn more about how computers can reason, I suggest you read up on "propositional logic", a formalism for knowledge representation. The next step up from that is "first-order logic", a more flexible and powerful representation that computers are unfortunately bad at reasoning with (though they're getting better). Second-order logic goes even further.
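For a flavor of what a propositional reasoner actually does, here is a toy forward-chaining loop in Python; the facts and rules are invented for illustration:

```python
# Minimal sketch: forward chaining over propositional Horn clauses.
# Facts and rules are made-up examples, not from any real system.
facts = {"rainy", "cold"}
rules = [
    ({"rainy"}, "wet_ground"),
    ({"wet_ground", "cold"}, "icy_ground"),
    ({"icy_ground"}, "slippery"),
]

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)  # derive a new proposition
            changed = True

print(facts)  # {'rainy', 'cold', 'wet_ground', 'icy_ground', 'slippery'}
```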
Your implied question, of course: is the most promising direction for AI one that recreates the conditions of human intelligence and trusts the machine to do the rest, or one that constructs algorithms for accomplishing the same tasks as the human brain? The first possibility, of course, is much sexier. But in several decades of AI, it hasn't really borne much fruit. That's not to say it won't; but most researchers are concentrating on the second approach.
Lastly, I would note that most serious AI researchers consider "chat-bots" such as Alice to be not worth their time. A cynic might consider this to be sour grapes on their part, but they've made convincing arguments that having a program that pretends to be a human is not the acid test of AI that it was once thought to be. So you might want to reconsider how you define--and, more importantly, test--"intelligence".
I'm going to disagree, personally I have a lot of faith in connectionist style approaches. I think the reason that current ANNs are nothing better than function approximators is simple: everybody is making feed-forward networks. It's true that a feed-forward network can't be anything more than a function approximator. But a recurrent network can be a lot more, I think. Of course, recurrent networks are that much harder to train, and it would have to be a pretty large network before you got any interesting behavior. But there's potential there, I think.
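A quick sketch of the difference, assuming numpy (the sizes and random weights are arbitrary): the extra W_rec matrix feeds the hidden state back into the network, so its output can depend on the whole input history rather than only the current input, which a feed-forward net cannot do:

```python
import numpy as np

rng = np.random.default_rng(0)

# A feed-forward net maps input -> output with no memory; this Elman-style
# recurrent step also feeds the hidden state back in at every time step.
n_in, n_hidden = 3, 5
W_in  = rng.normal(size=(n_hidden, n_in))      # input -> hidden weights
W_rec = rng.normal(size=(n_hidden, n_hidden))  # hidden -> hidden (the recurrence)

h = np.zeros(n_hidden)                 # internal state persists across steps
for x in rng.normal(size=(10, n_in)):  # a made-up input sequence
    h = np.tanh(W_in @ x + W_rec @ h)  # state now reflects all past inputs
```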
The problem I have with symbolic approaches is that they are trying to solve a problem which may very well be impossible. First-order and second-order logic systems are good ideas in theory, but their computational requirement goes up exponentially with the amount of knowledge. They become unusable after a few thousand propositions. And we look at these systems and ask, "why can't these systems have common sense, when it's so easy for people?" The thing is that symbolic approaches are usually trying to solve a problem that's harder than what the brain does. The brain is not a perfect reasoning system. How do we know that it's not a fundamentally intractable problem to build a perfect reasoning system? Whereas with a connectionist-style approach, we know that it's possible, and we have an estimate of how difficult it is.
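To see the blowup concretely: even for plain propositional logic, the naive decision procedure enumerates every truth assignment, which is exponential in the number of variables. A Python sketch with a made-up formula:

```python
from itertools import product

# Naive satisfiability check by truth-table enumeration: 2**n assignments.
def satisfiable(formula, n_vars):
    return any(formula(v) for v in product([False, True], repeat=n_vars))

# Toy formula over 3 variables: (a or b) and (not b or c)
f = lambda v: (v[0] or v[1]) and (not v[1] or v[2])
print(satisfiable(f, 3))  # True, after at most 2**3 = 8 checks

# At 30 variables that is already ~10**9 assignments; first-order logic,
# which quantifies over objects, is harder still.
```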
Anyway, for good reading, there's the book On Intelligence, which basically argues the same thing- that a brain-inspired approach is the way to go.
Your points are well-taken. I agree that AI in many cases attempts to solve a harder problem than the human brain has solved. For example, computers are much, much better at pathfinding than humans will ever be. I think this probably comes down to the academic context which has produced most of the advances in AI: a successfully "marketed" advance in AI will be one which is resistant to nitpickers pointing out pathological cases. But I don't think that that in itself indicates that the answer to "strong AI" is not in symbolic approaches; it may simply be that we're asking the wrong question. After all, there's no reason that more nuanced, less accurate algorithms can't be represented algorithmically rather than as ANNs. So I'd offer a proposition which is opposite to yours: connectionist approaches to AI may in the ideal case simply perform operations that are congruent to those of symbolic reasoning algorithms, yet do so in a much less efficient fashion.
The reason neural networks are so important is that they allow learning and adaptation. If a system uses a fixed algorithm, or a set of algorithms, for its "intelligence", then these algorithms, which may be very powerful, can become a limiting factor to the expansion of that intelligence. It would seem that for a system to be truly intelligent (which requires adaptation, since otherwise many new problems cannot be solved), the system has to be able to re-program itself. This is possible with neural networks, but very hard to achieve with a programmed system.
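Here's a toy Python illustration of that idea (the target rule and learning rate are made up): the neuron's "program" is its weight vector, and the learning rule rewrites it from experience rather than from a programmer's edits:

```python
import numpy as np

# Sketch: in a neural network, "re-programming" is just adjusting weights.
# A single neuron learns a hidden linear rule online from examples.
rng = np.random.default_rng(1)
w = np.zeros(2)                 # the neuron's modifiable "program"

for _ in range(1000):
    x = rng.uniform(-1, 1, size=2)
    target = 1.0 if x[0] + 0.5 * x[1] > 0 else 0.0  # unknown rule to learn
    y = 1.0 if w @ x > 0 else 0.0                   # neuron's current guess
    w += 0.1 * (target - y) * x                     # perceptron update rule

print(w)  # w now points roughly along (1, 0.5); no code was rewritten
```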
Neural networks produce approximations very fast; they look for "where the solution should be", while programmed systems tend to look for "a valid solution". I personally can't really imagine how one would program a system that could reprogram itself to learn new things. Certainly, one can construct a huge database of facts and logical propositions, but it becomes very difficult for a programmed AI to expand on that database and learn from it.
I had an idea earlier on how to illustrate this difficulty of adaptation in a programmed AI:
Imagine you programmed an AI which could understand written English through a set of subroutines, and was able to answer questions and reference a knowledge base. Now imagine this AI needs to learn to understand French. How does it proceed to learn an entirely new language?
Quote: Original post by Sneftel
So I'd offer a proposition which is opposite to yours: connectionist approaches to AI may in the ideal case simply perform operations that are congruent to those of symbolic reasoning algorithms, yet do so in a much less efficient fashion.
Yeah, I can agree with that. I heard a great quote once, "Neural networks are the second best method of solving any problem". [grin]
But (in my humble opinion), I think the most likely path is that the neural network version of the brain will come first, and the optimized symbolic version will come out of that. It would take an extremely clever and prescient programmer to go straight to the symbolic version. Like you said, "it may simply be that we're asking the wrong question", and I think knowing which question to ask is the hard part.
my 50 cents :-)
Why do we have AI studies? To understand different aspects of intelligence, so that we can reproduce them in different applications?
Why do we need the artificial, when we have the real thing? Because the real thing is too complex to understand totally.
So if we make 10 billion artificial neurons emerge into some sort of intelligence, will we understand it? (Even a 50-neuron ANN is hard to debug and understand.) And then, if we don't understand what we created, can we still call this progress in the study of AI? (Since the point is to understand.)
Remember: the human body is controlled by neurons, but robots use electrical wires. They both do the job, but the neurons are harder to understand and break more easily.
Also: a human needs about 18 years before its intelligence is fully developed and all the knowledge is in place. You don't need a giga database, you need a learning system, and a school and an environment where it can evolve its knowledge. You learn to be smart! :-)
I'm thinking that unified systems (with different methods for different purposes) are the answer!
-Anders-Oredsson-Norway-
Quote: So if we make 10 billion artificial neurons emerge into some sort of intelligence, will we understand it? (Even a 50-neuron ANN is hard to debug and understand.) And then, if we don't understand what we created, can we still call this progress in the study of AI? (Since the point is to understand.)
I was wondering about this some time ago: would it be possible to understand, for example, the algorithm that a neural network trained for face recognition has internally developed through learning? Then I realised it was irrelevant. When neural networks learn, they do it by adaptation. Their intelligence comes from their capability to adapt, not from snapshots of the states of the circuits they develop while learning. Neural networks are simply natural intelligence.
As far as how to understand, conceive, and use them goes, the best we can do is come up with basic models that are efficient. Creating something out of them is then a matter of combining multiple neural networks together. Our brain, for example, is not just a mass of neurons. It is organized into many smaller neural networks, which are linked together. Its organization gives it some inherent purpose and function. If the brain were just a big pack of randomly organized neurons, there would be no naturally emergent behavior at all.
So my answer is: if we build a 10-billion-neuron artificial neural network, we will *have* to understand it for it to work, because we will have to compartmentalize this network into smaller sub-units and design it for a purpose. It would surprise me if we created a sentient computer with no understanding of how it works, just by chance. Neural networks don't work by chance.
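As a toy sketch of that kind of compartmentalization (Python with numpy; the module names and sizes are invented for illustration), small sub-networks with assigned roles can be wired together into a structured whole rather than one undifferentiated mass:

```python
import numpy as np

rng = np.random.default_rng(2)

def subnet(n_in, n_out):
    """A tiny random feed-forward module (stand-in for a trained sub-network)."""
    W = rng.normal(size=(n_out, n_in))
    return lambda x: np.tanh(W @ x)

# Hypothetical architecture: two specialized modules feed a combining module,
# mirroring the idea that the brain is smaller networks linked together.
vision  = subnet(8, 4)   # made-up "visual" sub-network
hearing = subnet(6, 4)   # made-up "auditory" sub-network
combine = subnet(8, 2)   # integrates both modules' outputs

x_img, x_snd = rng.normal(size=8), rng.normal(size=6)
decision = combine(np.concatenate([vision(x_img), hearing(x_snd)]))
```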
Quote: Also: a human needs about 18 years before its intelligence is fully developed and all the knowledge is in place. You don't need a giga database, you need a learning system, and a school and an environment where it can evolve its knowledge. You learn to be smart! :-)
Well, we learn to be *smarter*, certainly, but we still have some natural intelligence. Take my cat, for example. He was born with instincts: programming that has genetically evolved, to start with. He knows how to eat, drink, and walk. He also knows that fast objects coming at him are to be avoided. Nobody had to teach him that. This is not really proof of intelligence (it might as well be automated). However, he has proven capable of learning. He often sleeps on my office chair, and I used to have to force him out when I needed to sit. So I developed a trick: I pretended I was about to sit on him until he got too scared and jumped off by himself. Now, as soon as he sees me approach the chair, he gets off. He's learned that I need to sit on this chair, and that I can force him off if I have to. He's also learned to find all the most comfortable spots in the house to sleep in, and at which times those spots are free (he only sleeps on my bed during the day).
My point is that our intelligence is natural. It starts from only a few basic programmed (instinctive) principles, but it's bound to naturally expand with time.
Hi,
I've read somewhere that there are things a neural-network-based text recognition system can't figure out, even though they're easy for humans.
So neural networks are not always the answer!
“Always program as if the person who will be maintaining your program is a violent psychopath who knows where you live”
The truth is that neural networks are not even that popular anymore in machine learning research. For classification and regression, they have been replaced by support vector machines and Gaussian-process-based models. For time-series stuff (i.e. the stuff you would use a recurrent neural network for), they have been replaced by hidden Markov models, conditional random fields, etc.
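For instance, here is a minimal classification example with a support vector machine, assuming scikit-learn is available (the data is synthetic):

```python
# Sketch of the kind of classifier that displaced ANNs for this task.
from sklearn import svm
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=200, n_features=4, random_state=0)
clf = svm.SVC(kernel="rbf")         # support vector machine, RBF kernel
clf.fit(X[:150], y[:150])           # train on the first 150 points
print(clf.score(X[150:], y[150:]))  # accuracy on the held-out points
```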