Hello, I'm just diving deeper into neural nets and the surrounding concepts.
It's probably a little premature to think about optimization and compute times, but I was looking for pointers on designing efficient neural net architectures.
If I'm approaching the classic “teach a rigged skeleton to walk from scratch” problem, there are a lot of inputs: per-joint angles, joint velocities, momentum, etc. But there are also a lot of inferable details. Giving it the COG of each individual limb might let it infer the overall COG, but it's easy enough to just supply the overall COG as an input directly. That got me thinking about many other combinations, like whether I should also give individual groups of joints their own COG as inputs. It might be important information that reduces how much abstract learning it has to do through depth. But then I realized one could get almost infinitely granular with every permutation of inputs to combine. If I give it only its joint state and which direction it's facing, it could infer how far its feet are from the ground, but is it better to tell it directly? Then why not tell it how far every joint is above the ground? And the position of every joint relative to every other joint, etc., etc.? So is it better to give the neural net inputs it may never fully use than to have it abstract them through more layers/neurons?
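To make the trade-off concrete, here's a minimal sketch of what "raw inputs vs. derived inputs" looks like as an observation vector. All the names and numbers (limb masses, COG positions, joint counts) are made up for illustration and not tied to any particular physics engine; the point is just that derived features like overall COG or a ground-clearance proxy are a cheap concatenation at observation-build time, which you can toggle on or off and A/B test against a deeper network:

```python
import numpy as np

# Hypothetical per-limb data: masses and center-of-gravity positions (x, y, z).
# Illustrative values only -- a real setup would read these from the simulator.
limb_masses = np.array([4.0, 2.5, 2.5, 1.0, 1.0])
limb_cogs = np.array([[ 0.0, 1.0, 0.0],
                      [ 0.1, 0.6, 0.0],
                      [-0.1, 0.6, 0.0],
                      [ 0.1, 0.1, 0.0],
                      [-0.1, 0.1, 0.0]])

def overall_cog(masses, cogs):
    """Mass-weighted mean of the per-limb COGs: the derived quantity the net
    could in principle learn on its own, but which is cheap to hand it."""
    return (masses[:, None] * cogs).sum(axis=0) / masses.sum()

def build_observation(joint_angles, joint_velocities, masses, cogs,
                      include_derived=True):
    """Concatenate the raw joint state, optionally appending derived features:
    overall COG and the lowest limb-COG height as a crude ground-clearance proxy."""
    parts = [joint_angles, joint_velocities]
    if include_derived:
        parts.append(overall_cog(masses, cogs))
        parts.append(np.array([cogs[:, 1].min()]))
    return np.concatenate(parts)

# 8 joints with angle + velocity each, plus 4 derived values -> 20-dim observation.
obs = build_observation(np.zeros(8), np.zeros(8), limb_masses, limb_cogs)
```

The usual heuristic is that well-chosen derived features (COG, foot clearance, relative positions of a few key pairs) speed up learning a lot, while "every joint relative to every other joint" blows up quadratically and mostly adds redundant dimensions; keeping the derived block behind a flag makes it easy to measure which side of that line each feature falls on.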
The overall intent, fwiw, is to eventually teach it to box other AIs and use an RL tournament to continuously refine the fighters. So I'm hoping to eventually incorporate collisions, forces applied, and how weight, momentum, and different body shapes contribute to energy efficiency, and to give it all of the opponent's inputs as its “sight,” which adds even more complexity to its inputs if it can also see how fast its opponent's hands are moving, where they are relative to their face, etc., etc.
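For the tournament side, one common way to "continuously refine" a population is to pair fighters off, simulate bouts, and track a rating so the best policies can be selected and copied. Here's a minimal sketch using an Elo-style update (my choice of rating scheme, not implied by anything above); `fight` is a stand-in for the actual simulated bout and here just returns a win/loss score for fighter `a`:

```python
import random

def elo_update(r_a, r_b, score_a, k=32):
    """Standard Elo update. score_a is 1.0 for a win, 0.5 draw, 0.0 loss.
    Returns the new ratings for a and b."""
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))
    delta = k * (score_a - expected_a)
    return r_a + delta, r_b - delta

def run_tournament(ratings, n_rounds, fight, rng):
    """Repeatedly sample two distinct fighters, run a bout, update ratings.
    `fight(a, b)` is a placeholder for the physics-simulated match."""
    for _ in range(n_rounds):
        a, b = rng.sample(range(len(ratings)), 2)
        score_a = fight(a, b)
        ratings[a], ratings[b] = elo_update(ratings[a], ratings[b], score_a)
    return ratings

# Toy usage: fighter 0 always wins, so its rating should climb.
rng = random.Random(0)
ratings = run_tournament([1000.0] * 4, n_rounds=50,
                         fight=lambda a, b: 1.0 if a == 0 else 0.5, rng=rng)
```

The rating layer is independent of the learning algorithm: after each tournament generation you can clone the top-rated policies (with noise or continued training) into the next population, which is the basic self-play loop.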
If anybody has recommended tutorials/walk-throughs for this kind of thing, I'd appreciate the links. Thanks!