For example, in a CollisionSystem, if two entities have a CollisionComponent and collide, the system would fire an event into the wild saying so. How can you efficiently guarantee that the responses to this event are fired in the right order?
The real questions should perhaps be: which other sub-systems are addressed by the event, what do their responses mean with respect to the collision system, and … what responsibility does the collision system mentioned here have of its own?
Well, the movement sub-system wants collision detection (and correction) so that moving avatars do not bump into one another. The damage sub-system, on the other hand, wants collision detection (without correction) to determine when damage occurs. The visual-sense sub-system wants collision detection to determine whether a foe is inside the viewing cone of an AI agent (to set the alert mode). And so on, and so on. Hence ...
a) the subsets of entities that are suitable for the one or the other kind of collision detection vary with the sub-system that is interested in such collisions;
b) the different meanings of actual collisions already introduce an order in which collisions should be detected (e.g. first movement, then damage);
c) the collision volumes are not necessarily the same for the different kinds of collision detection;
d) why should a generic collision detection be responsible for movement, AI, damage, and so on;
e) all of which leads (IMHO) to the conclusion that collision detection is in itself a service used by other (higher-order) sub-systems,
f) and that such sub-systems should not be processed in an arbitrary order anyway.
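To make points e) and f) concrete, here is a minimal sketch of collision detection as a service consumed by higher-order sub-systems in a fixed order. All names (CollisionService, MovementSystem, DamageSystem) and the 1D-interval volumes are my own illustration, not something from the original discussion:

```python
# Hypothetical sketch: collision detection as a pure query service.
# Each sub-system asks the service about its own entity subset and its
# own volumes; the game loop, not an event bus, fixes the order.

class CollisionService:
    """Pure geometry queries; no game meaning of its own."""
    @staticmethod
    def overlaps(a, b):
        # 1D intervals (min, max) keep the sketch short.
        return a[0] <= b[1] and b[0] <= a[1]

class MovementSystem:
    """Wants detection *and* correction (bumping avoidance)."""
    def __init__(self, service):
        self.service = service

    def update(self, entities):
        blocked = []
        for i, e in enumerate(entities):
            for other in entities[i + 1:]:
                if self.service.overlaps(e["body"], other["body"]):
                    blocked.append((e["id"], other["id"]))
        return blocked  # a real system would correct positions here

class DamageSystem:
    """Wants detection only, over different volumes (hit/hurt boxes)."""
    def __init__(self, service):
        self.service = service

    def update(self, attackers, targets):
        hits = []
        for a in attackers:
            for t in targets:
                if self.service.overlaps(a["hitbox"], t["hurtbox"]):
                    hits.append((a["id"], t["id"]))
        return hits  # damage resolution, no position correction

# The explicit call order encodes "first movement, then damage":
service = CollisionService()
movement = MovementSystem(service)
damage = DamageSystem(service)
```

Note that neither sub-system fires anonymous events; each one interprets the collision result in its own context.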
The ideas "having a sub-system per component" and "each sub-system has an update()" should not be treated as fundamental; each tool should be used where appropriate and replaced by something else where it is not. Even in the case of collision detection they may be used (more or less strictly) as soon as the generic collision volume component is dropped in favor of more specific components. For example, an entity may have a VisualSense component, which means the AI has to use it. The component is parametrized with a viewing-cone definition, because the underlying mechanic is based on collision detection. When that component is enabled, the entity plays the role of a detector in the visual-sense sub-system. The entity may also provide a VisualStimulus component. This one is driven by, e.g., the movement sub-system: the faster the movement, the stronger the stimulus. A mandatory parameter of the VisualStimulus is a volume for collision detection (which may, but need not, be the same as the volume used for bumping avoidance). So, when that component is enabled, the entity also plays the role of a detectable within the visual-sense sub-system.
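A rough sketch of that detector/detectable split might look as follows. The post only names the VisualSense and VisualStimulus components; the concrete fields and the 2D cone test are my assumptions:

```python
# Hypothetical sketch of the VisualSense / VisualStimulus roles.
# An entity with an enabled VisualSense is a detector; one with an
# enabled VisualStimulus is a detectable.
import math
from dataclasses import dataclass

@dataclass
class VisualSense:            # detector role, used by the AI
    position: tuple
    direction: tuple          # unit vector of the gaze
    half_angle: float         # viewing cone half-angle, radians
    max_range: float
    enabled: bool = True

@dataclass
class VisualStimulus:         # detectable role
    position: tuple
    strength: float           # e.g. driven by movement speed
    enabled: bool = True

def in_cone(sense, stim):
    """Cone test: the 'collision detection' underlying visual sensing."""
    dx = stim.position[0] - sense.position[0]
    dy = stim.position[1] - sense.position[1]
    dist = math.hypot(dx, dy)
    if dist == 0 or dist > sense.max_range:
        return dist == 0
    cos_angle = (dx * sense.direction[0] + dy * sense.direction[1]) / dist
    return cos_angle >= math.cos(sense.half_angle)

def visual_sense_update(senses, stimuli):
    """The visual-sense sub-system: pair every enabled detector with
    every enabled detectable and report detections (e.g. to the AI)."""
    return [(s, t) for s in senses if s.enabled
            for t in stimuli if t.enabled and in_cone(s, t)]
```

Disabling either component simply removes the entity from the corresponding role, without touching any other sub-system.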
You can still define collision volumes with CollisionVolume components if you wish, but such a component has no meaning in itself. Only its use as a parameter to another component puts it into a meaningful context (a bit like data and behavior components). Well, this is the way I've chosen; it is of course not the only one.
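The "data component gains meaning from context" point can be sketched in a few lines. The component names besides CollisionVolume and VisualStimulus are my own, and the fields are assumptions:

```python
# Hypothetical sketch: a CollisionVolume is pure data; only the component
# that references it gives it a meaning.
from dataclasses import dataclass

@dataclass
class CollisionVolume:
    radius: float            # no meaning by itself

@dataclass
class Body:
    volume: CollisionVolume  # context: bumping volume for movement

@dataclass
class VisualStimulus:
    volume: CollisionVolume  # context: detectable volume for visual sense

# Same data type, two contexts; the volumes may, but need not, coincide:
bump = Body(volume=CollisionVolume(radius=0.5))
seen = VisualStimulus(volume=CollisionVolume(radius=2.0))
```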