Scheduler behavior therefore becomes a property of the algorithm itself. A scheduler which doesn't preserve strict order can be O(1). Even an order-preserving scheduler can be O(1), at the expense of memory. It's impractical as a general-purpose algorithm, but it looks something like this:

std::list<Process*> next_to_execute[MAX]; // next_to_execute[i] = list of processes to execute at time i
(...)
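(Filling that idea in a little: a minimal sketch of such an order-preserving O(1) scheduler. MAX, Process, schedule, run_tick, and the dispatch hook are all hypothetical names of mine, not from the post above.)

#include <list>

struct Process; // opaque process handle

const int MAX = 1024; // assumed upper bound on wake times

std::list<Process*> next_to_execute[MAX]; // next_to_execute[i] = processes to wake at tick i

// O(1) enqueue: append to the bucket for the requested wake time.
void schedule(Process* p, int wake_tick) {
    next_to_execute[wake_tick].push_back(p);
}

// Each tick, run the current bucket in insertion order, so FIFO order
// among processes with equal wake times is preserved.
void run_tick(int now) {
    for (Process* p : next_to_execute[now]) {
        (void)p; // dispatch(p) would go here; dispatch is hypothetical
    }
    next_to_execute[now].clear();
}

This trades O(MAX) memory for constant-time scheduling, which is exactly why it's impractical as a general-purpose algorithm.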
For me, a stable comparison sort is a special case of the problem, one which most schedulers will not be able to achieve due to the non-deterministic nature of threads. So we might as well go for an unstable non-comparison sort.
OK, sure. I guess the question is whether most schedulers actually handle sleep this way. I was under the impression that either heaps or sorted lists were far and away the most common, but I'll accept otherwise. And yes, I do understand the idea that the "circle of confusion," as it were, can be thought of as a bucket, and so...
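(For contrast, the heap-based timer I had in mind looks roughly like this; a sketch under my own naming, not any particular kernel's code. Each push and pop costs O(log n), so n sleeping threads cost O(n log n) in scheduler work alone.)

#include <functional>
#include <queue>
#include <utility>
#include <vector>

struct Process; // opaque process handle

// Min-heap keyed on wake time: the earliest sleeper sits on top.
using Entry = std::pair<long, Process*>;
std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> timers;

// O(log n) per sleeping process.
void sleep_until(Process* p, long wake_tick) {
    timers.push({wake_tick, p});
}

// Wake everything whose deadline has passed: O(log n) per pop.
void run_tick(long now) {
    while (!timers.empty() && timers.top().first <= now) {
        // dispatch(timers.top().second); // hypothetical dispatch hook
        timers.pop();
    }
}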
Bucket sort is O(n). If the original example can be shown to be equivalent, then it has the same complexity.
[/quote]
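(To make the claimed equivalence concrete, here is a plain counting/bucket sort over integers in [0, B), my own illustration rather than anything from the thread: one pass to fill the buckets, one ordered pass to drain them, i.e. O(n + B).)

#include <vector>

// Bucket (counting) sort for integers in [0, B): O(n + B) time, O(B) extra space.
std::vector<int> bucket_sort(const std::vector<int>& input, int B) {
    std::vector<int> counts(B, 0);
    for (int x : input)          // one pass to fill the buckets
        ++counts[x];
    std::vector<int> output;
    output.reserve(input.size());
    for (int v = 0; v < B; ++v)  // one in-order pass over the buckets
        output.insert(output.end(), counts[v], v);
    return output;
}

The bucket array here plays the same role as next_to_execute above: the index is the key, and ordering within a bucket costs nothing extra.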
I see what you mean. My only qualm with this argument is that it seems to assume the scheduler works "too well": if we're willing to assume the scheduler does something that makes it a bucket sort, why not go further and assume it runs all of the threads in parallel in the most optimal way possible? If we do that, then we don't (ideally) have to worry about even looking at all of the numbers sequentially, so all of the O(n) terms drop out entirely, leaving only the n processes sleeping independently. The only thing we have to worry about is the last thread finishing, which trivially runs in time proportional to the number it was assigned, that is, O(B), where B is the largest value to be sorted.
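(For anyone following along, the algorithm being analyzed is sleep sort, or something very close to it. A minimal threaded version, with the 10 ms tick length chosen arbitrarily by me:)

#include <chrono>
#include <cstdio>
#include <thread>
#include <vector>

// Each value sleeps in proportion to itself, so the output comes out in
// (approximately) sorted order, and the last thread finishes after time
// proportional to the largest value B.
void sleep_sort(const std::vector<int>& values) {
    std::vector<std::thread> threads;
    for (int v : values) {
        threads.emplace_back([v] {
            std::this_thread::sleep_for(std::chrono::milliseconds(10L * v));
            std::printf("%d\n", v);
        });
    }
    for (std::thread& t : threads) t.join();
}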
In any case, I think we agree that if we assume we can always distinguish between numbers with arbitrary precision, then O(B) will always be at least O(n), so even the parallel version doesn't buy us anything. Still, I think that O(B) is the most general time complexity, and O(n log n) is what will happen on many machines.