scruthut said:
For example, one can have a graphics group where objects are analyzed for their z-axis position value and sorted in z order if the orientation is correct for your perspective, then culled and dynamically tessellated depending on distance. Then they can be passed to a sequence object that renders the IRenderables. That is a simplification, I hope you understand. However, one would want to perform these steps on a batch of objects.
…
The IStep interface in my engine inherits from IObject, a managed object, and I might make IDataResource inherit from the same interface, such as IResource, because I consider the IStep interface an executional resource in my engine.
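For readers following along, here is a minimal sketch of the hierarchy being considered in that quote, assuming the variant where IStep and IDataResource share an IResource base; the member functions are hypothetical and the real engine's declarations will certainly differ:

```cpp
// Hypothetical minimal declarations to illustrate the hierarchy described above;
// not the poster's actual engine code.
struct IObject {                    // managed object base
    virtual ~IObject() = default;
};

struct IResource : IObject {        // shared "resource" base being considered
};

struct IDataResource : IResource {  // data-style resource (buffers, assets, ...)
};

struct IStep : IResource {          // "executional resource": a schedulable unit of work
    virtual void Execute() = 0;
};
```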
For parallel processing, the reason for doing it in parallel is paramount; it is quite as important as the algorithms you choose, and often it is the same core issue.
In parallel processing, data mapping and process communications are usually the primary drivers. There is often a tremendous effort in discovering the limitations, the longest spans of sequential work, and minimizing the data dependencies between them. It's about building algorithms that let you actively use all your processors or memory space to solve the problem with parallel speedup. In your early example you've got a situation where the sorting is a significant factor in the processing: you're reordering work and basing communications between work units on the viewpoint, saving work by re-ordering compared with the work that would be done if it were kept serial. This doesn't hold for what you wrote in your later posts, where the step is just a generic placeholder for a unit of work rather than something that leverages parallel algorithms.
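To make that contrast concrete, here is a minimal sketch of that kind of viewpoint-driven cull and sort over a batch, assuming a hypothetical Item record and leaning on C++17 parallel algorithms rather than any particular engine's scheduler:

```cpp
// Sketch only: the per-element work is independent, so it maps cleanly onto all
// cores, and the depth sort is a genuine parallel algorithm rather than a generic
// "task". Requires a toolchain with <execution> support (e.g. MSVC, or GCC with TBB).
#include <algorithm>
#include <execution>
#include <vector>

struct Item {
    float viewDepth;   // distance along the camera's forward axis
    bool  visible;     // result of the cull test
    // ... mesh handle, tessellation level, etc.
};

void PrepareBatch(std::vector<Item>& items)
{
    // Cull: each element is independent, so this spreads across every core.
    std::for_each(std::execution::par, items.begin(), items.end(),
                  [](Item& it) { it.visible = it.viewDepth > 0.0f; /* + frustum test */ });

    // Move culled items to the back, then depth-sort only the visible range.
    auto visibleEnd = std::partition(std::execution::par, items.begin(), items.end(),
                                     [](const Item& it) { return it.visible; });

    std::sort(std::execution::par, items.begin(), visibleEnd,
              [](const Item& a, const Item& b) { return a.viewDepth < b.viewDepth; });
}
```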
In games, remember that your consumer-facing system generally needs to work just as well with 4 cores as it does with 64. Server-side systems are generally constrained to VMs and low logical CPU counts. We're not building or using supercomputers. As a result, we often build systems that are task based, and build them up with priorities in several sets: the "must-process" tasks that get done first, the "nice to have" tasks like fine-step processing, and the "because we can" tasks like extremely detailed cloth simulations or whatever. Those aren't parallel processing algorithms; they're merely tasks that happen to be scheduled in parallel.
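A minimal sketch of those priority sets, with hypothetical names; a real task manager would distribute the buckets across worker threads rather than draining them in a single loop:

```cpp
// Sketch only: three priority buckets, where optional work is dropped once the
// frame budget is spent.
#include <chrono>
#include <functional>
#include <vector>

using Task  = std::function<void()>;
using Clock = std::chrono::steady_clock;

struct FrameTasks {
    std::vector<Task> mustProcess;   // correctness depends on these
    std::vector<Task> niceToHave;    // e.g. fine-step processing
    std::vector<Task> becauseWeCan;  // e.g. extra-detailed cloth, only on idle hardware
};

void RunFrame(FrameTasks& tasks, std::chrono::milliseconds budget)
{
    const auto deadline = Clock::now() + budget;

    for (auto& t : tasks.mustProcess)    // always runs, regardless of budget
        t();

    for (auto& t : tasks.niceToHave)     // runs only while frame time remains
        if (Clock::now() < deadline) t();

    for (auto& t : tasks.becauseWeCan)   // lowest priority, first to be dropped
        if (Clock::now() < deadline) t();
}
```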
In game simulation processing, that is exactly why the concept of a task manager is powerful and common. It is rare to use algorithms that rely on parallel processing for computational benefits, outside of a few specialty cases like hardware-accelerated physics or cook-time mesh refinements. We rarely need things like parallel large sparse matrix manipulation, parallel searching of enormous data sets, parallel all-pairs processing, or others that see significant benefit from parallel algorithms. However, it is quite common to build up huge collections of tasks as small bundles of work, where the small independent tasks can be scheduled in any order, with no particular benefit other than being more convenient to schedule and exposing the opportunity for optional processing on hardware that has many cycles to spare.
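As a rough illustration of that kind of bundle, here is a sketch that throws a batch of small, independent updates at the standard library and waits for them to finish in any order; std::async stands in for whatever task manager the engine actually uses, and the names are made up:

```cpp
// Sketch only: each task touches just its own entity, so no ordering or
// communication is needed. A real task manager would group these into larger
// batches instead of spawning one async job per entity.
#include <future>
#include <vector>

struct Entity { /* ... */ };

void UpdateEntity(Entity& e) { /* small, self-contained bundle of work */ }

void UpdateAll(std::vector<Entity>& entities)
{
    std::vector<std::future<void>> pending;
    pending.reserve(entities.size());

    for (Entity& e : entities)
        pending.push_back(std::async(std::launch::async, [&e] { UpdateEntity(e); }));

    for (auto& f : pending)   // the frame only needs them all done, in any order
        f.get();
}
```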