Skeletal Animation - Basics
From my understanding, there are simply different joints with positions relative to each other. There is one 'root' joint for the universe, which is basically the universe's origin. Then there is a hierarchy where every joint has an offset from its parent joint, and an angle. Every joint can have multiple child joints, and has limits on how large the angles of its child joints can be.
Is my understanding correct? It seems very computationally intensive to calculate such a thing.
Usually the 'root' joint is something each skeleton will have, rather than there being one 'root' for the entire universe (which presumably contains multiple models/skeletons/etc). Then, each other joint will have a parent, and its transform will be defined relative to that parent, as you described.
Assuming your joints are in a big array that is sorted based on the parent-child relationships, with parents coming before any of their children in the array (this is a classic topological sort of a directed acyclic graph, which is what a skeleton usually is), computing each joint's worldspace position will just be one matrix multiply (ChildMtx_World = ParentMtx_World * ChildMtx_Relative). So that's not too computationally expensive.
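To make that single-pass idea concrete, here's a minimal 2D sketch. The `Joint`/`WorldXform` layout is made up for illustration; a real engine would use 4x4 matrices and do exactly the `ChildMtx_World = ParentMtx_World * ChildMtx_Relative` multiply described above.

```cpp
#include <cmath>
#include <vector>

// A minimal 2D joint (hypothetical layout): its parent's index in the
// array, plus a local transform (offset + rotation) relative to that parent.
struct Joint {
    int parent;            // index of the parent joint, -1 for the root
    float localX, localY;  // offset from the parent, in the parent's frame
    float localAngle;      // rotation relative to the parent, in radians
};

struct WorldXform { float x, y, angle; };

// Because parents come before their children in the array (topological
// order), a single forward pass computes every world transform: each
// joint just composes its local transform with its parent's world one.
std::vector<WorldXform> computeWorld(const std::vector<Joint>& joints) {
    std::vector<WorldXform> world(joints.size());
    for (size_t i = 0; i < joints.size(); ++i) {
        const Joint& j = joints[i];
        if (j.parent < 0) {                    // root: local == world
            world[i] = { j.localX, j.localY, j.localAngle };
            continue;
        }
        const WorldXform& p = world[j.parent]; // already computed
        float c = std::cos(p.angle), s = std::sin(p.angle);
        world[i].x = p.x + c * j.localX - s * j.localY;
        world[i].y = p.y + s * j.localX + c * j.localY;
        world[i].angle = p.angle + j.localAngle;
    }
    return world;
}
```

For example, a root rotated 90 degrees with a child one unit along the root's local X axis should land that child near (0, 1) in world space. The whole skeleton update is one multiply (here, one sin/cos compose) per joint.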
The "limits on how big the angle" can be between two links is something that isn't a necessary component of a skeletal animation system. Those limits are used in constraint-based solvers (like inverse kinematics or ragdoll physics, etc). There are a number of techniques for solving these constrained systems (which basically always end up being big linear algebra problems), and while they aren't cheap, they're definitely feasible in realtime.
This article describes the basics of skeletal hierarchy and animation. This article, although it's primarily about blending animations, should provide some additional insight.
With regard to being computationally intensive: even if all animations were determined and calculated in real time, it wouldn't be horribly bad. In practice, most animations, such as running, walking, aiming, etc., are fleshed out in a modeling program (such as Blender). The data necessary to form the skeletal hierarchy and animate the skinned mesh in real time are then imported into the application. That is, the determination of angles and associated animation data is done offline.
It's still an option to compute animations in real time, commonly using IK (inverse kinematics), to animate a character (for instance) reaching for an object, or aiming a weapon.
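To illustrate, here's a tiny 2D CCD (cyclic coordinate descent) solver, one of the simpler IK techniques, with the per-joint angle limits from the question clamped as a constraint. The `Chain` layout and the symmetric ±max limit are simplifying assumptions of mine, not how any particular engine does it.

```cpp
#include <cmath>
#include <vector>

// A 2D bone chain rooted at the origin: joint i has an angle relative to
// its parent, a bone of length len[i] to the next joint, and a
// hypothetical symmetric limit of +/- maxAngle[i] on that relative angle.
struct Chain {
    std::vector<float> angle, len, maxAngle;
};

// Forward pass: world position of each joint; pos[0] is the root,
// pos.back() is the end effector.
static void positions(const Chain& c, std::vector<float>& px, std::vector<float>& py) {
    float a = 0, x = 0, y = 0;
    px.assign(c.angle.size() + 1, 0);
    py.assign(c.angle.size() + 1, 0);
    for (size_t i = 0; i < c.angle.size(); ++i) {
        a += c.angle[i];
        x += c.len[i] * std::cos(a);
        y += c.len[i] * std::sin(a);
        px[i + 1] = x;
        py[i + 1] = y;
    }
}

// One CCD iteration: walk from the end of the chain to the root, rotating
// each joint so the end effector swings toward the target, then clamp the
// joint to its limit. Repeating this a handful of times per frame is
// typically enough to converge for reachable targets.
void ccdStep(Chain& c, float tx, float ty) {
    std::vector<float> px, py;
    for (int i = (int)c.angle.size() - 1; i >= 0; --i) {
        positions(c, px, py);
        float ex = px.back() - px[i], ey = py.back() - py[i]; // joint -> effector
        float gx = tx - px[i],        gy = ty - py[i];        // joint -> target
        c.angle[i] += std::atan2(gy, gx) - std::atan2(ey, ex);
        // constraint: keep the relative angle within [-max, +max]
        if (c.angle[i] >  c.maxAngle[i]) c.angle[i] =  c.maxAngle[i];
        if (c.angle[i] < -c.maxAngle[i]) c.angle[i] = -c.maxAngle[i];
    }
}
```

With two unit-length bones and a target at (1, 1), a few `ccdStep` calls bend the chain into an L so the end effector reaches the target; tightening `maxAngle` on a joint makes the solver do its best within the limit instead, which is exactly the constrained behavior described above.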