
Poll question

Started by September 10, 2006 06:40 PM
25 comments, last by sharpnova 18 years, 4 months ago
Hi! I have an interesting question that I think we programmers can enjoy discussing. Do you think it's possible to create a program that emulates human logic? Why or why not?
Why would you want to recreate human logic in a machine? Humans are very poor at logic and typically don't follow logical lines of reasoning when performing inference tasks. There has been plenty of research to support this. Could you suggest a scenario in which it would be useful to have a computer perform poorly on a logical inference task, as a human would?

Cheers,

Timkin
XD Ha ha! Exactly why I used the word "emulate."

So you believe it's possible to create a program that could perform logical operations even better than a human? And I don't mean simple mathematical operations, but actually solving life problems deductively.

[EDIT] Because of course computers have already advanced beyond us in solving mathematical problems [/EDIT]
How is the human mind not already a computer?
Artificial neural networks have already been created that can solve human-like problems (albeit very specific ones) better than humans themselves. The fact that we can simulate mid-level neuronal behaviour so accurately allows us to create networks that can tackle anything in the same manner grey matter does [pinch of salt not included].

Our successes can only lead us to logically ([wow]) deduce that given sufficient processing power we could produce arbitrarily intelligent organisms, though not necessarily anything superior (for a very bright human is, alas, still human). If you ask me, I'd say anything with a larger, faster, better adapted brain than a human will be more intelligent than a human.

Regards
Admiral
Ring3 Circus - Diary of a programmer, journal of a hacker.
Quote:
Original post by TheAdmiral
Artificial neural networks have already been created that can solve human-like problems (albeit very specific ones) better than humans themselves. The fact that we can simulate mid-level neuronal behaviour so accurately allows us to create networks that can tackle anything in the same manner grey matter does [pinch of salt not included].

Our successes can only lead us to logically ([wow]) deduce that given sufficient processing power we could produce arbitrarily intelligent organisms, though not necessarily anything superior (for a very bright human is, alas, still human). If you ask me, I'd say anything with a larger, faster, better adapted brain than a human will be more intelligent than a human.

Regards
Admiral


That's an extraordinary claim. Extraordinary claims need extraordinary evidence. Where is the evidence that we can build ANNs for anything remotely resembling intelligence?
Quote:
Original post by TheAdmiral
Artificial neural networks have already been created that can solve human-like problems (albeit very specific ones) better than humans themselves. The fact that we can simulate mid-level neuronal behaviour so accurately allows us to create networks that can tackle anything in the same manner grey matter does [pinch of salt not included].

Our successes can only lead us to logically ([wow]) deduce that given sufficient processing power we could produce arbitrarily intelligent organisms, though not necessarily anything superior (for a very bright human is, alas, still human). If you ask me, I'd say anything with a larger, faster, better adapted brain than a human will be more intelligent than a human.

Regards
Admiral


Artificial neural networks bear only a vague resemblance to biological neural networks. That name should be banned to hell and replaced with "piecewise linear regression" or any other semantically meaningful name. Good implementations are neither network- nor neuron-like. We currently CANNOT tackle ANYTHING in the same manner grey matter does. Biological researchers barely understand how biological neural networks work, let alone us computer scientists. The performance of ANNs, and other non-linear regression techniques, in terms of results, is NOT a question of processing power. It is MUCH more complicated than that.
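The "piecewise linear regression" point can be made concrete. A minimal sketch (assuming rectified-linear activations and made-up weights, purely for illustration): a one-hidden-layer network is just a piecewise-linear function of its input, with no biology in sight.

```python
# A one-hidden-layer network with rectified-linear (ReLU) units is just a
# piecewise-linear function of its input: straight-line segments joined at
# "kinks". All weights here are invented for illustration; nothing is learned.
def tiny_mlp(x, w1, b1, w2, b2):
    hidden = [max(0.0, w * x + b) for w, b in zip(w1, b1)]  # ReLU "layer"
    return sum(v * h for v, h in zip(w2, hidden)) + b2      # linear readout

# Two hidden units give a function with at most two kinks (at x = 0 and x = 1):
def f(x):
    return tiny_mlp(x, w1=[1.0, -1.0], b1=[0.0, 1.0], w2=[1.0, 1.0], b2=0.0)
```

Between any two kinks the output is exactly a straight line, which is why "piecewise linear regression" is a fair description of what such a model computes.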

Finally, the claim that ANNs can solve some problems more efficiently than humans is irrelevant, and does not indicate that they emulate, surpass, or could attain human logic capabilities. A simple loop can solve many problems more efficiently than anyone.
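To illustrate that last point: a throwaway loop sums the first million integers faster and more reliably than any unaided human, yet nobody would credit it with intelligence.

```python
# Summing 1..1,000,000: far beyond unaided human speed and accuracy,
# but there is clearly no reasoning going on.
total = 0
for n in range(1, 1_000_001):
    total += n

print(total)  # 500000500000, matching n*(n+1)/2
```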

Don't hesitate to PM me if you want to learn more about the magic world of machine learning. Timkin is also much more knowledgeable than me about this.

As for the original question, it is impossible without first understanding human logic. Just as there are several schools of thought in the psychology world, there are also several classes of approaches in artificial intelligence that try to emulate them. In my particular field of application (Computer Vision), methods inspired by psychology research (Gestalt theory) used to be quite popular years ago, with mixed results.
Quote:
Original post by hh10k
How is the human mind not already a computer?


The human mind 'computes', but is not a 'computer' if we take the common usage of the term (meaning an implementation of a Turing machine).

Quote:
Original post by Steadtler
Artificial neural networks bear only a vague resemblance to biological neural networks. That name should be banned to hell and replaced with "piecewise linear regression" or any other semantically meaningful name. Good implementations are neither network- nor neuron-like.


I think it is important to delineate between feed-forward multi-layer perceptrons and the broader class of models called 'neural networks'. Unfortunately, due mostly to bad press, poor education and the 'band wagon' of writing books on MLPs, most people identify the term 'neural network' with an MLP network. While I'm certain that Steadtler is aware of other architectures, most people are not, and hence I want to add a few words on the topic.

There are indeed architectures that very closely mimic biological methods for information transfer and processing. While I agree they are not yet at an implementational level to rival the complexity of the networks we know to exist within the brain, much of the understanding of how these networks process and propagate signals has been set down in the literature. We know, for example, exactly how the human olfactory system stores and learns to delineate between different smells, how this information is encoded in the neurons of that system, and how the dynamics work when you smell things you recognise and things you don't. We know how orientation and velocity are represented in the vision systems of certain animals and insects (and we're fairly sure of how these are handled in primates like us). We're even coming to an understanding of how these parallel information-processing systems implement diverse dynamics and yet remain essentially stable. These are just two examples of the wealth of understanding of neuronal systems.

What we're really lacking is the bits between the cracks (the joining together of the many theories of information processing in parallel systems into a unitary whole) and the larger picture: how does cognition arise from these component parts? I don't want to hijack this thread into a philosophical discussion of the latter, because we've already had enough of those around here. I just want to reiterate the point I made above: while we cannot yet create an implementation of a human brain (other than growing one), we're well on the way to understanding how it does what it does.

Now, if we want to discuss what 'human logic' is, that's a far more pertinent exercise. The first thing we should do is throw out the word 'logic' when referencing human inference methods... because humans are not logical (i.e., their methods of inference don't form a closed set of self-consistent rules that permit both deductive and inductive reasoning). It's very easy to show that humans make common, repeatable mistakes of logic.

So, let's talk about modelling 'human inference'...

Cheers,

Timkin

[Edited by - Timkin on September 12, 2006 7:22:11 PM]
Yeah, that's exactly why I wanted to start this thread. I'm learning a lot. ~clap clap~

I think I worded the question incorrectly, though. There's a human slip of logic for ya. I guess I should have asked if computers could possibly mimic the human mind, and simply stuck with that word. It's true that computers exceed us in logical ability, and that will most likely always be true.

What I think makes human beings so much better at solving problems (disagree if you like, but remember that humans created computers in the first place) is intuition, which computers could never possess unless a different, non-logical system of programming is created.

I know this is a bit off topic, but I do believe that computers could have emotions, or at least mimic them. Besides, what are human emotions but programmed reactions? We learn to be angry the first time we see someone around us get angry. We learn what makes others sad and begin to become sad at things ourselves.

Hmm... I don't really know anything. It's just fun to discuss stuff I think about.

Steadtler, can I PM you to learn more about machine learning?

Thanks,

Mage
Quote:
Original post by mageofdreams

What I think makes human beings so much better at solving problems (disagree if you like, but remember that humans created computers in the first place) is intuition, which computers could never possess unless a different, non-logical system of programming is created.

Mage



That intuition you are talking about, isn't it just a process of trial and error? The intuition could be wrong, so we fail. Then we have another "intuition" and we try again... We could have success or not.
Just like the most common methods for machine learning: fail and try.
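That "fail and try" loop is easy to caricature in code. A hedged sketch (the target function, bounds, and tolerances here are all invented for illustration): generate a guess, test it against reality, and keep it only if it failed less than the previous best.

```python
import random

def guess_and_check(target, tries=10_000, seed=1):
    """Blind trial-and-error search for x with x*x close to target."""
    rng = random.Random(seed)
    best, best_err = None, float("inf")
    for _ in range(tries):
        guess = rng.uniform(-10.0, 10.0)   # have an "intuition"
        err = abs(guess * guess - target)  # test it; it may well be wrong
        if err < best_err:                 # keep whatever failed least so far
            best, best_err = guess, err
    return best

root = guess_and_check(2.0)  # a crude estimate of +/- sqrt(2)
```

This is of course the bluntest possible form of the idea; real machine-learning methods direct the next "intuition" using the failures of the previous ones, rather than guessing blindly.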
