Lactose said: "Could you elaborate on this one?"
Sure, I'll give it a shot. Different training systems did different things, but they generally had multiple elements involved in finalizing a student's score.
Let's say you're training a tank crew. During one training exercise you're going to track the equivalent of an accuracy score on targets destroyed, how long it took to complete different maneuvers, how quickly targets were identified, and so on. That specific exercise may have scoring requirements tied to its particular training intent.
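To make the idea concrete, here is a minimal sketch of per-exercise scoring against exercise-specific requirements. All of the names, metrics, and thresholds below are hypothetical illustrations, not the actual system's schema.

```python
from dataclasses import dataclass

@dataclass
class ExerciseResult:
    """Raw measurements captured during one training run (illustrative fields)."""
    targets_destroyed: int
    targets_presented: int
    maneuver_times_s: dict   # maneuver name -> seconds taken
    targets_identified: int

def score_exercise(result: ExerciseResult, requirements: dict) -> dict:
    """Turn raw run data into per-metric scores, judged against this
    exercise's own requirements (assumed dict keys are illustrative)."""
    presented = max(result.targets_presented, 1)
    accuracy = result.targets_destroyed / presented
    # A maneuver passes if it was completed within its time budget.
    maneuvers_passed = sum(
        1 for name, t in result.maneuver_times_s.items()
        if t <= requirements["maneuver_budgets_s"].get(name, float("inf"))
    )
    return {
        "accuracy": accuracy,
        "identification_rate": result.targets_identified / presented,
        "maneuvers_passed": maneuvers_passed,
        "meets_requirements": accuracy >= requirements["min_accuracy"],
    }
```

The key point is that the requirements travel with the exercise definition, so the same raw telemetry can score differently depending on what the exercise is meant to teach.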
Typically these systems also have an instructor (although we experimented with self-instruction), and from their instructor seat (Instructor Operator Station, or IOS), they annotate the score with their own observations of what the crew did. The instructor gets a fully relayed view of each crew's displays and typically some type of "God" view.
The individuals in the crew may be scored independently as crew members as well as a collective crew.
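That two-level scoring could be sketched like this. The weighting and the "coordination" metric are assumptions for illustration only; the real blend was system- and customer-specific.

```python
def crew_score(individual_scores: dict, crew_metrics: dict) -> dict:
    """Score crew members individually and as a collective.
    Hypothetical blend: half the average individual score,
    half a crew-level coordination measure."""
    avg_individual = sum(individual_scores.values()) / len(individual_scores)
    collective = 0.5 * avg_individual + 0.5 * crew_metrics["coordination"]
    return {"individual": individual_scores, "collective": collective}
```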
And over the course of a full curriculum, your next exercise progression may depend on how well you scored in prior exercises. Gates are defined and must be passed, but your scores from the first three exercises may also shape the type of training you receive in the fourth.
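The gating-plus-adaptation idea might look like the sketch below. The variant names and thresholds are invented for illustration; the substance is that gates are must-pass, while the scores themselves steer which flavor of the next exercise you get.

```python
def next_exercise(prior_scores: list, gates_passed: bool) -> str:
    """Pick the fourth-exercise variant from the first three scores.
    Gates are must-pass regardless of score (thresholds are illustrative)."""
    if not gates_passed:
        return "remediation"
    avg = sum(prior_scores) / len(prior_scores)
    if avg >= 0.85:
        return "exercise_4_advanced"
    if avg >= 0.60:
        return "exercise_4_standard"
    return "exercise_4_reinforcement"
```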
It becomes a fun multi-dimensional matrix where scoring isn't just what you did individually during runtime, but also how your team members did, how the instructor thinks you did, how you did in previous exercises, and all that may determine what you do in the future.
A lot of this was driven by customer requirements, but some of the more complex areas of scoring were our own experimentation to improve retention and training time, which ended up being successful.
The engine/tools we developed allowed a fair amount of this at the exercise level to be implemented through visual scripting - the equivalent of blueprints. This allowed non-engineer domain experts - the training Subject Matter Experts (SMEs) - to define training directly using their expertise, instead of having to translate their domain knowledge into requirements that a software engineer could understand and then implement.
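The idea behind that kind of tooling is that scoring rules live in data rather than code: the SME authors the rules (via a visual graph in the real tools; shown here as a plain dict for illustration), and a small engine evaluates them. Every name, operator, and threshold below is a hypothetical stand-in.

```python
# SME-authored exercise definition: a data structure, not code.
EXERCISE_DEF = {
    "name": "convoy_defense",
    "rules": [
        {"metric": "accuracy", "op": ">=", "value": 0.8, "weight": 2.0},
        {"metric": "time_to_identify_s", "op": "<=", "value": 5.0, "weight": 1.0},
    ],
}

# The engine knows a fixed vocabulary of comparisons the SME can pick from.
OPS = {">=": lambda a, b: a >= b, "<=": lambda a, b: a <= b}

def evaluate(definition: dict, metrics: dict) -> float:
    """Return the weighted fraction of rules the measured metrics satisfy."""
    total = sum(r["weight"] for r in definition["rules"])
    passed = sum(
        r["weight"] for r in definition["rules"]
        if OPS[r["op"]](metrics[r["metric"]], r["value"])
    )
    return passed / total
```

Because the definition is pure data, changing what an exercise rewards never requires an engineer; the SME edits the rules and the engine stays untouched.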