@vilem otte Thank you for the technical details.
I am not claiming that the current state of AI is satisfactory; I am claiming it is very promising.
If you manage to create a good network that is not overfit to only the tracks it has been shown, it will be able to drive on unknown tracks. It does "think" inside, in a manner of speaking. Every track in this universe has borders we don't want our car to cross. They could be walls, or rocks, or a lake. But this rule, “try not to cross the limits”, is shared among all tracks, and a NN can learn it. Then it will learn another thing that is shared among all possible tracks, “the faster you go, the harder it is to turn”, and so on. A NN that is neither overfit nor underfit actually “thinks”.
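To make the "shared rule" idea concrete, here is a toy sketch of my own (not a real driving controller): a single perceptron learns "steer away from the nearer border" from two narrow tracks, then is tested on a much wider track it has never seen. The rule transfers because it is expressed in features (distances to the borders), not in any specific track layout. All names and numbers here are made up for illustration.

```python
# Toy sketch: a perceptron learning a track-independent steering rule.
import random

random.seed(0)

def make_samples(track_width, n=200, margin=0.2):
    """(dist_to_left, dist_to_right) -> steer label.
    +1 = steer right (left border is nearer), -1 = steer left."""
    data = []
    while len(data) < n:
        x = random.uniform(0.0, track_width)
        left, right = x, track_width - x
        if abs(left - right) < margin:
            continue  # skip ambiguous samples right on the center line
        data.append(((left, right), 1 if left < right else -1))
    return data

def train_perceptron(data, epochs=50, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (f1, f2), y in data:
            pred = 1 if w[0] * f1 + w[1] * f2 + b > 0 else -1
            if pred != y:  # classic perceptron rule: update on mistakes
                w[0] += lr * y * f1
                w[1] += lr * y * f2
                b += lr * y
    return w, b

def accuracy(w, b, data):
    return sum(
        1 for (f1, f2), y in data
        if (1 if w[0] * f1 + w[1] * f2 + b > 0 else -1) == y
    ) / len(data)

# Train on tracks of width 4 and 6; evaluate on an unseen width-20 track.
w, b = train_perceptron(make_samples(4.0) + make_samples(6.0))
print(round(accuracy(w, b, make_samples(20.0)), 3))
```

The interesting part is that the learned weights end up with opposite signs (roughly "negative on the nearer-border distance"), which is exactly the track-independent steering rule; an overfit model would instead memorize positions that are only valid on the training tracks.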
It is still a lot of manual work to create a good NN, but I think the field of ML/AI is very promising.
NNs do indeed show signs of intelligence. It is hidden in the hidden layers: NNs can figure out features that the developer did not see. This behavior is a candidate for some kind of (non-general) intelligence. Then again, what is intelligence… the job of a cat is to meow…
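The "hidden features" point can be shown with a classic toy: XOR. Here is a tiny hand-built network (my own illustration, not a trained one) whose hidden units happen to compute OR and AND, intermediate features that were never wired in as a goal; a trained network discovers features like these on its own.

```python
# Hand-built two-layer network: hidden units form intermediate
# "features" (OR and AND), and XOR emerges from their combination.
def step(x):
    return 1 if x > 0 else 0

def xor_net(a, b):
    h1 = step(a + b - 0.5)      # hidden unit ~ OR(a, b)
    h2 = step(a + b - 1.5)      # hidden unit ~ AND(a, b)
    return step(h1 - h2 - 0.5)  # output: OR and not AND = XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_net(a, b))
```

A single perceptron cannot compute XOR at all; it is precisely the hidden layer's discovered features that make it possible.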
A good setup of NNs should be able to play various games. I mean the goal of generalization: to be trained on Tetris and then play Mario. In theory it could be done perfectly; it is just that NNs are still a field under development. I am talking about complex networks that are already available, and I cannot set them up yet; I am still learning. With the perceptron as the base, they already offer very advanced setups of various complicated NNs working together.
What I like least about NNs is that they still require a lot of human attention and a lot of human guidance.
Theoretically, you can just throw the input at a network and it should learn anything.
Then, in practice, we are limited by computational power. That is why even the big corporations have to use CNNs: the shared weights keep the computation manageable, because even their supercomputers cannot handle fully connected networks at that scale. In practice, somebody trains a network for days only to find it is less effective than the network he trained the previous week, and has to restart the training process.
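On the "this run is worse than last week's, restart everything" problem: standard practice is to checkpoint the best model seen so far and stop early when validation stops improving, so a bad run never throws away the best result. A minimal sketch, where `train_one_epoch` and `validate` are hypothetical placeholders for whatever framework you use:

```python
# Early stopping with best-model checkpointing (framework-agnostic sketch).
import copy

def train_with_early_stopping(model, train_one_epoch, validate,
                              max_epochs=100, patience=5):
    best_score = float("-inf")
    best_model = copy.deepcopy(model)
    epochs_without_improvement = 0
    for _ in range(max_epochs):
        train_one_epoch(model)
        score = validate(model)
        if score > best_score:
            best_score = score
            best_model = copy.deepcopy(model)  # checkpoint the best run
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break  # don't let a degrading run eat more compute
    return best_model, best_score
```

In a real setup `copy.deepcopy` would be replaced by saving weights to disk, but the idea is the same: you always keep last week's better network.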
In practice it is underwhelming, but theoretically (not sure how soon) we could just throw any input at a NN and it would learn it, if we had enough computational power, of course. If there is some rule or hidden feature to be discovered, a NN should surely figure it out; again, in theory. In practice, the process is not so exciting.
My understanding does not reach RNNs yet, but many researchers claim RNNs can be Turing complete… if you manage to train them well.
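For intuition on why recurrence is so powerful: the hidden state is memory, so even a one-neuron recurrent unit (hand-set weights, my own toy, not a trained RNN) can implement a small state machine. The Turing-completeness claims build on this same idea taken much further: the recurrent state can, in principle, encode far richer machines.

```python
# Minimal recurrent step: hidden state h acts as memory, implementing
# a tiny state machine that remembers "have I seen a 1 yet?".
def step_fn(x):
    return 1 if x > 0 else 0

def rnn_run(bits, w_h=1.0, w_x=1.0, bias=-0.5):
    h = 0
    for x in bits:
        h = step_fn(w_h * h + w_x * x + bias)  # one recurrent update
    return h

print(rnn_run([0, 0, 0]))  # stays 0: no 1 ever seen
print(rnn_run([0, 1, 0]))  # latches to 1 once a 1 appears
```

A feed-forward network of fixed depth cannot do this for arbitrarily long inputs; the loop over time is what gives the RNN its extra power.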
On the need for vast computation: there is research into using analog hardware for neural networks. This could skyrocket the performance of NNs if such hardware is successfully introduced.
Right now, NNs/AI are not a miracle, mainly because an expert needs to guide them constantly. But I think that in the future a non-expert might be able to train a NN to simulate any program he wants.
We are not there yet, though. I still have to code manually for ten more years on my traditional project, and in ten years who knows how advanced NNs will be.
(Imagine I give you an elixir of eternal life to drink, then tell you to manually create a NN that drives on any planar track. You have to slide every single weight of the network by hand, and manually decide how many layers, what connections, and how many neurons to have inside. It would take you 1000 years to do it manually, but in the end it would drive on any track, and it would be a NN “without an engine”: an absolutely perfectly trained (programmed) NN.
This is why I use the word “theoretically” so often. It is possible for a sophisticated NN to do it, but it is infeasible to train one to do it. Researchers are working on ways to improve training and everything about NNs in general. I think within 10 years we should see a very big breakthrough.)
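To put a number on that thought experiment, here is a back-of-envelope count of how many values you would be sliding by hand in even a modest fully connected network. The layer sizes are made up purely for illustration.

```python
# Count weights + biases in a fully connected network, layer by layer.
def param_count(layer_sizes):
    total = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        total += n_in * n_out + n_out  # weight matrix + bias vector
    return total

# A small net: 64 inputs, two hidden layers of 128, 2 outputs.
print(param_count([64, 128, 128, 2]))
```

Even this tiny architecture has tens of thousands of parameters, and modern networks have millions or billions, which is why nobody sets them by hand and why training algorithms matter so much.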