Quote:
Original post by Emergent
Yep. It seems silly to call that learning, right? But in essence that's all any function approximator, including a neural network, does.
Yes, it's silly, because that is not learning. What you're talking about is one-to-one memory mapping, which is nothing like what an NN does. A neural network is not just memory, it's also a CPU: it integrates *logic* together with information.
Why would anyone "train" some independent values in a static table if any such memory array can be populated by simply addressing the memory one-to-one?
When you train an NN you change the dynamics of the whole system: not just the memory, but the way it processes information, the way it "thinks". Everything here is connected and interdependent, function/logic and memory, which is why it needs to be adjusted step by step, unlike anything else.
Static tables have only one static function, a simple "recall". The program searches the input database, and if and when it finds an exact match it returns whatever is stored there as the output pair. It's one-to-one mapping, and if it weren't, it would be random or probabilistic. The important thing is that there is no *processing* here, which is why we call it a 'look-up table' and use it for optimization.
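To make the comparison concrete, here is a minimal sketch of that kind of pure recall, using XOR as the stored function (the table name and code are mine, just for illustration):

    #include <stdio.h>

    /* Every input pair maps one-to-one to a stored answer; nothing is computed. */
    static const int xor_table[2][2] = {
        {0, 1},   /* 0 XOR 0, 0 XOR 1 */
        {1, 0}    /* 1 XOR 0, 1 XOR 1 */
    };

    int main(void){
        int a = 1, b = 0;
        printf("%d XOR %d = %d\n", a, b, xor_table[a][b]);   /* pure recall, no processing */
        return 0;
    }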
NNs can indeed learn, not just memorize. Try to map the 255,168 possible games of a simple 3x3 tic-tac-toe one-to-one into a look-up table. At two bytes per entry that's about 510,336 bytes, yet the *logic* of it all can apparently fit in only 81 bytes by using an ANN type of storage/processing.
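For what it's worth, the 255,168 figure can be checked by brute force. Here is a quick enumeration sketch of my own, counting every move sequence until a win or a full board:

    #include <stdio.h>

    /* The eight winning lines of a 3x3 board, as bitmasks over cells 0..8. */
    static const int lines[8] = {0007, 0070, 0700, 0111, 0222, 0444, 0421, 0124};

    static int wins(int s){
        for(int k = 0; k < 8; k++)
            if((s & lines[k]) == lines[k]) return 1;
        return 0;
    }

    /* Count every distinct move sequence until a win or a full board. */
    static long count(int x, int o, int xToMove){
        if(wins(x) || wins(o) || (x | o) == 0777) return 1;   /* game over */
        long n = 0;
        for(int i = 0; i < 9; i++){
            if((x | o) & (1 << i)) continue;                  /* cell occupied */
            n += xToMove ? count(x | (1 << i), o, 0)
                         : count(x, o | (1 << i), 1);
        }
        return n;
    }

    int main(void){
        printf("%ld\n", count(0, 0, 1));   /* prints 255168 */
        return 0;
    }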
Yes, you actually can show an ANN 5=2+3, 7=2+5, 9=1+8... and it can learn ADDITION, not just the results. There is simply not enough space in the universe to map logic one-to-one: the number of COMBINATIONS for anything over two bits of input increases very dramatically. So, instead of memorizing the answers, an ANN can somehow generalize, like humans do, and use this generalized *logic* as a function to really calculate, not just recall, the answers and produce correct output even for inputs it has never seen before.
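As a rough illustration of that generalization claim (a sketch of my own, not a full ANN: a single linear unit y = w1*a + w2*b trained by gradient descent on a few example sums), the weights settle near 1 and 1, so the unit then adds pairs it was never shown:

    #include <stdio.h>

    int main(void){
        /* Training pairs like the ones above: (2,3)->5, (2,5)->7, (1,8)->9, ... */
        const double a[] = {2, 2, 1, 4, 6}, b[] = {3, 5, 8, 4, 1};
        const double y[] = {5, 7, 9, 8, 7};
        double w1 = 0.0, w2 = 0.0, rate = 0.01;

        for(int epoch = 0; epoch < 2000; epoch++)
            for(int k = 0; k < 5; k++){
                double err = (w1*a[k] + w2*b[k]) - y[k];   /* prediction error */
                w1 -= rate * err * a[k];                   /* gradient step    */
                w2 -= rate * err * b[k];
            }

        /* An input pair never seen during training: */
        printf("w1=%.3f w2=%.3f, 12+34 ~= %.2f\n", w1, w2, w1*12 + w2*34);
        return 0;
    }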
"Give a man a fish and he will eat for a day. Teach him how to fish and he will eat for a lifetime."Quote:
Original post by Emergent
3 - You can "calculate" the XOR of a and b by just looking at the (a,b)-th entry in the table and thresholding it.
What you have there is not a threshold; it was just some initial value.
Why do you think a static table would ever need to be initialized in such an indirect way? You can populate the table directly; this is no ANN, so you can just put the results exactly where and how you want them. You populated the table with some fixed results, but you did it "slowly", in random steps. Why? Where did you ever see anyone using this kind of learning method on anything but neural networks?
Are you seriously suggesting any of those minimax or whatever other algorithms can compete with this:
void NetMove(int *m){
    int i, j;
    for(i=0;i<9;i++) for(j=0;j<9;j++) if(a&(1<<i)){
        Out[j] += Wgt[i][j];              /* Wgt taken as 9x9, per the 81-byte figure above */
        if(Out[j]==25||Out[j]==41||Out[j]==46||Out[j]==50) Out[j] += Out[j];
    }
    for(i=0,j=-9;i<9;i++){
        if(j<Out[i] && !(a&(1<<i)||b&(1<<i))) j=Out[i], *m=i;   /* pick the best-scoring empty cell */
        Out[i]=0;                                               /* clear accumulators for next call */
    }
}
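For context, that snippet leans on state that isn't shown. Here is a guess at the surrounding declarations and a usage sketch (the names come from the snippet, but the types and sizes are my assumption, chosen so the 81-byte weight figure works out; these declarations would need to sit above NetMove):

    /* Assumed context for NetMove(); not the original poster's actual declarations. */
    int a, b;                 /* bitboards: a = net's marks, b = opponent's, one bit per cell 0..8 */
    int Out[9];               /* accumulated activation for each of the nine cells */
    signed char Wgt[9][9];    /* 9 x 9 = 81 one-byte weights, matching the "81 bytes" claim */

    void ExampleTurn(void){   /* e.g. somewhere in the game loop */
        int move = -1;
        NetMove(&move);       /* move now holds the chosen empty cell index, 0..8 */
        if(move >= 0) a |= 1 << move;
    }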
TriKri,
Yes, it does not make any sense to use AI for physics equations. What EJH meant, most likely, is that physics is getting more and more complex, requiring more and more complex AI to be able to handle it, like driving a car.
Taking it further, eventually we might see AI walking and actually looking where it's going to step next, which will involve a lot of inverse-kinematics-type physics/math. Only there could you substitute the "physics of walking" by simulating muscle contraction and relaxation with an NN, just as is done in robotics. Instead of driving a car, the AI would drive a body; instead of a steering wheel and gas pedal, it would contract and relax the appropriate muscles, while obeying whatever laws of physics the program throws at it.