
Neural networks and a fly

Started by July 10, 2000 04:42 PM
2 comments, last by avoden 24 years, 7 months ago
I see a lot of posts about what happens if we create an AI which simulates the human mind, and what that would mean for us. The way I see it, whatever we can't predict has the potential to be seen as a living thing. Take the laws of physics: before humans discovered them, every natural phenomenon was considered an act of God (an intelligent being). Remember how many gods there were before Christianity (Egypt, Greece, ...). Now we have fewer mysteries, and look: space is just a cold vacuum, and lightning and hurricanes are just more results of the equation called Earth. More and more becomes predictable.

Now let's take a fly. If I put a fly in my game and can make it act as if it were alive, I have found the law called "Fly AI", and all of a sudden I can say: "Hmm, it's not a living thing, just a rule. I can even predict what it's going to do *if* I'm given enough input about that fly." A fly doesn't have as many neurons or behaviours as a human. Take a few million artificial neurons, give them simulated visual and other inputs like a fly's, and this AI *will* simulate a fly closely enough. The point is that you'd have to spend tons of time just teaching this neural network (NN) the correct behaviour. This isn't the future, it is NOW. The problem is that computing power is still too weak for it, and nobody has spent the time teaching an NN the behaviour of a fly.

As for humans: the task of creating a 100% human AI is, in my opinion, theoretically impossible. As soon as you know human behaviour, you know yourself; you become predictable. But knowing that, a human can say "I'll behave exactly opposite to the law" and thus break the law. So, given the power of making choices, a human can break ANY law about himself - evolution. I think even if you applied your knowledge to harming flies, they would adapt and change their behaviour, thus breaking the law. What am I trying to say? You can simulate behaviour, or even a whole person, at a frozen moment in time, but I can't see how you can simulate an evolving mind - the mutation of behaviour.
So I don't think there is any threat to the human ego here, as long as humans evolve.

Dreamer
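The "teach a neural network fly behaviour" idea above can be sketched with a single artificial neuron. This is a minimal pure-Python perceptron; the stimuli and the "flee" rule are invented for illustration, not real fly data:

```python
# A toy "Fly AI": one artificial neuron learns a flee/stay rule from examples.
# Once trained, its behaviour is a fixed rule, fully predictable from inputs.
import random

random.seed(0)

# Hypothetical training data: (looming_shadow, food_smell) -> 1 = flee, 0 = stay.
examples = [
    ((1.0, 0.0), 1),
    ((1.0, 1.0), 1),
    ((0.0, 1.0), 0),
    ((0.0, 0.0), 0),
]

# One artificial neuron: weighted sum of inputs plus a threshold.
weights = [random.uniform(-1, 1), random.uniform(-1, 1)]
bias = 0.0
lr = 0.1  # learning rate

def predict(inputs):
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if s > 0 else 0

# Perceptron learning rule: nudge weights toward the desired output.
for _ in range(100):
    for inputs, target in examples:
        error = target - predict(inputs)
        for i, x in enumerate(inputs):
            weights[i] += lr * error * x
        bias += lr * error

# After training, the "fly" reacts the same way to the same stimulus every time.
print(predict((1.0, 0.0)))  # looming shadow -> 1 (flee)
print(predict((0.0, 1.0)))  # just food -> 0 (stay)
```

A real fly would need vastly more neurons and inputs, as the post says, but the principle is the same: once the weights are fixed, so is the behaviour.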

...But knowing that, a human can say "I'll behave exactly opposite to the law" and thus break the law...

But isn't the *act* of simulating human intelligence using a replica of the brain going to capture the behaviour of such 'evolution' of intelligence?

What if you simulated two copies of the same human brain, and let one copy perceive the other as running on a 'virtual machine'? Then the first copy could act as a real human would in the example given: it could see what behaviour is predicted of it and deliberately act differently.

If we have a VM and a human:
I think that by building the AI in a VM, you know all the inputs and all the variables in the formula, so you *can* predict the next step. The fact that we programmed it and know *all* its inputs makes it a law.
If we have two humans, we don't know all the inputs and the history of the other person, and thus can't predict their behaviour.
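This VM point can be made concrete. In the sketch below (the agent and its stimuli are invented for illustration), "predicting" a deterministic program whose code, state, and inputs we fully control is nothing more than re-running it:

```python
# If we know an agent's code, its internal state (including its RNG seed),
# and every input, then prediction is just a second run of the same program.
import random

def simulated_agent(inputs, seed):
    """A toy agent whose 'choices' depend only on its inputs and seed."""
    rng = random.Random(seed)  # even its randomness is part of the known state
    choices = []
    for stimulus in inputs:
        if stimulus > rng.random():
            choices.append("act")
        else:
            choices.append("wait")
    return choices

inputs = [0.9, 0.1, 0.5, 0.7]

prediction = simulated_agent(inputs, seed=42)  # our "prediction"
actual = simulated_agent(inputs, seed=42)      # the agent "really" running
print(prediction == actual)  # True: the agent cannot surprise us
```

With another human we have neither the "seed" nor the full input history, which is exactly the asymmetry the post describes.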

Another point just came to me:
humans are open systems - we are changed by outside events. If we want to make a perfect AI, we have to give that VM the same variety of inputs, allowing it to mutate/evolve.

Andrei
First - excuse my English, it's a little rusty.

I think human intelligence works in a way that can't be predicted, no matter how much you know about the person's history and inputs. Some devices in nature give output which is 100% random (unlike computer randomness, which is predictable). A computer AI will be predictable unless you put a piece of hardware in it that gives pure random numbers and uses them in the decision-making process. Still, even if it's predictable, that doesn't mean it's not intelligent.
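The pseudorandom-versus-hardware distinction can be shown with Python's standard library: a seeded PRNG is reproducible by anyone who knows the seed, while `os.urandom` draws on operating-system entropy sources that an observer cannot replay:

```python
# "Computer random" is predictable: the same seed yields the same sequence,
# so a decision process built on it can always be re-derived.
import os
import random

a = random.Random(1234)
b = random.Random(1234)
print([a.randint(0, 9) for _ in range(5)] ==
      [b.randint(0, 9) for _ in range(5)])  # True: identical sequences

# Hardware/OS-backed randomness: there is no seed to know, so there is
# nothing to replay - the bytes differ on every run.
print(os.urandom(8).hex())
```

An AI that routes its choices through the second kind of source would be unpredictable in the sense the post means, even though its decision *rules* are known.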

- Iftah

This topic is closed to new replies.
