
Is O(0) equal to O(1)?

Started by July 10, 2015 08:29 AM
35 comments, last by cowsarenotevil 9 years, 3 months ago


I think it would be wrong to say that a bubble sort algorithm has a constant complexity just because it is only called with arrays of up to 10 elements in size.

Isn't that exactly how most 'optimal' quicksort algorithms actually work, though?

By performing an O(N^2) insertion sort on small enough input sets to consider them constant time?
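
A minimal sketch of that hybrid approach (the cutoff of 10 and all names here are illustrative, not any particular library's implementation):

```cpp
#include <algorithm>
#include <vector>

// Hypothetical hybrid sort: recurse with quicksort, but hand small
// partitions to insertion sort, whose O(N^2) worst case is bounded
// by a constant once N <= kThreshold.
constexpr int kThreshold = 10; // illustrative cutoff

void insertionSort(std::vector<int>& a, int lo, int hi) {
    for (int i = lo + 1; i <= hi; ++i) {
        int key = a[i], j = i - 1;
        while (j >= lo && a[j] > key) { a[j + 1] = a[j]; --j; }
        a[j + 1] = key;
    }
}

void hybridQuicksort(std::vector<int>& a, int lo, int hi) {
    if (hi - lo + 1 <= kThreshold) {    // small enough: treat as O(1)
        insertionSort(a, lo, hi);
        return;
    }
    int pivot = a[lo + (hi - lo) / 2];  // middle-element pivot
    int i = lo, j = hi;
    while (i <= j) {                    // Hoare-style partition
        while (a[i] < pivot) ++i;
        while (a[j] > pivot) --j;
        if (i <= j) std::swap(a[i++], a[j--]);
    }
    hybridQuicksort(a, lo, j);
    hybridQuicksort(a, i, hi);
}
```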

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

Sure, does that not fall under "trying to determine the overall complexity of a larger unit composed of lots of smaller algorithms"? :]

O(0) would mean a complexity that tends toward zero for large input sizes. It may still take a ridiculously large amount of clock time, but you know that if you provide larger numbers tending out to infinity, asymptotically the complexity is continually decreasing. The function -- whatever it is -- becomes progressively less complex to solve the bigger the input, asymptotically reaching no complexity at all.

Funnily, that would mean that many cryptographic attacks have O(0) complexity, as they tend to get easier the more plain/ciphertext pairs or known plaintexts you have. :)

... That isn't what algorithmic complexity means. What you mention is more akin to having an oracle source, or exchanging size/space for time.

No matter how many you've got, the algorithmic complexity remains polynomial, or factorial, or whatever it was.


Funnily, that would mean that many cryptographic attacks have O(0) complexity, as they tend to get easier the more plain/ciphertext pairs or known plaintexts you have. :)

That would be O(1/x) complexity, and if you presented an algorithm like that, you'd get a Nobel prize before the universe explodes. It would really be more like a subtraction of terms, e.g. O(x*x) - O(knowciphersize*x) = O(x*x - knowciphersize*x) (purely illustrative).


O(0) would mean a complexity that tends toward zero for large input sizes. It may still take a ridiculously large amount of clock time, but you know that if you provide larger numbers tending out to infinity, asymptotically the complexity is continually decreasing. The function -- whatever it is -- becomes progressively less complex to solve the bigger the input, asymptotically reaching no complexity at all.

Not sure I agree with this at all.. that would be O(1). O(0) is only the empty algorithm. NOP is not O(0), an empty function that is actually a function is not O(0), ret 0 is not O(0), the only thing that is O(0) is nothing, the empty algorithm that is never started and never ends as it is nothing. Try to call it on the complete set of particles in the universe and it is never run, the algorithm is never started and never existed as it was erased on compile-time.

If one source contains an algorithm and another does not and they yield identical bytecode after compilation, then that algorithm is O(0).

Leaf-methods on trees where only branches have code can reasonably be described as O(0), if they are expected to be optimized out, for example. Can't think of an example where it would make a difference for the final O(..) .. but I somehow suspect there might be some example where a sub-algorithm that is expected to be removed would give a theoretical answer closer to reality if it was designated O(0) rather than O(1) in the analysis.
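
As a sketch of that idea (hypothetical types, assuming an optimizing compiler inlines and erases the empty body):

```cpp
// Hypothetical leaf/branch pair: the leaf's visit() is empty, so an
// optimizing compiler inlines it away and emits no code for the calls.
// In the analysis of Branch::visit(), counting the leaf as O(0) rather
// than O(1) matches the generated machine code.
struct Leaf {
    void visit() const {}  // empty body: erased at compile time
};

struct Branch {
    Leaf left, right;
    int payload = 0;

    int visit() const {
        left.visit();    // compiles to nothing
        right.visit();   // compiles to nothing
        return payload;  // the only work that survives
    }
};
```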


Not sure I agree with this at all.. that would be O(1). O(0) is only the empty algorithm. NOP is not O(0), an empty function that is actually a function is not O(0), ret 0 is not O(0), the only thing that is O(0) is nothing, the empty algorithm that is never started and never ends as it is nothing.

The fastest code is code that does not run (I forgot whom to cite)



O(0) would mean a complexity that tends toward zero for large input sizes. It may still take a ridiculously large amount of clock time, but you know that if you provide larger numbers tending out to infinity, asymptotically the complexity is continually decreasing. The function -- whatever it is -- becomes progressively less complex to solve the bigger the input, asymptotically reaching no complexity at all.

Not sure I agree with this at all.. that would be O(1). O(0) is only the empty algorithm. NOP is not O(0), an empty function that is actually a function is not O(0), ret 0 is not O(0), the only thing that is O(0) is nothing, the empty algorithm that is never started and never ends as it is nothing. Try to call it on the complete set of particles in the universe and it is never run, the algorithm is never started and never existed as it was erased on compile-time.

If one source contains an algorithm and another does not and they yield identical bytecode after compilation, then that algorithm is O(0).

Leaf-methods on trees where only branches have code can reasonably be described as O(0), if they are expected to be optimized out, for example. Can't think of an example where it would make a difference for the final O(..) .. but I somehow suspect there might be some example where a sub-algorithm that is expected to be removed would give a theoretical answer closer to reality if it was designated O(0) rather than O(1) in the analysis.

I'm going to disagree slightly with both of these. To me the most coherent definition of O(0), at least based on the definition of O I've always used (which is the same as Wikipedia's) is to say that f(x) is O(0) if and only if f(x) is the empty algorithm for values of x larger than some (finite) x0.

Similar to what frob is saying, but I don't think saying that a function tends toward having no complexity is quite strong enough, as it permits that the function f(x) may still have non-zero complexity for all values of x.

-~-The Cow of Darkness-~-

I've a feeling that if we ask Siri about this, she will tell us we have no friends :(


I think it would be wrong to say that a bubble sort algorithm has a constant complexity just because it is only called with arrays of up to 10 elements in size.

However, I think it is fair that when trying to determine the overall complexity of a larger unit composed of lots of smaller algorithms, you could then treat the "up to 10 element bubble sort", as a kind of O(1) black box.

Well, that's just the thing with big-O. It is well known that you can hardly find an algorithm worse than Bubble Sort.

Nevertheless, Bubble Sort on at-most-10 elements does at most 45 compares (exactly 45 for 10 elements) and between 0 (sorted) and 45 swaps (exactly 45 for 10 elements reversely sorted). So, the upper bound of both the number of steps and the number of swaps is constant. Memory usage is constant (zero extra) as well. Thus, for that scenario, Bubble Sort is O(1) in every respect. It's still a pathetically bad algorithm, but it's formally O(1) by all means (if N is known to be small).
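
A quick sketch of that bound (the instrumented counters are illustrative): Bubble Sort over N <= 10 elements never exceeds N(N-1)/2 = 45 compares.

```cpp
#include <cassert>
#include <cstdio>
#include <utility>

// Bubble sort instrumented with compare/swap counters, to check the
// constant bound claimed above: for N <= 10, compares <= 45.
void bubbleSort(int* a, int n, int& compares, int& swaps) {
    for (int pass = 0; pass < n - 1; ++pass)
        for (int i = 0; i < n - 1 - pass; ++i) {
            ++compares;
            if (a[i] > a[i + 1]) { std::swap(a[i], a[i + 1]); ++swaps; }
        }
}

int main() {
    int worst[10] = {9, 8, 7, 6, 5, 4, 3, 2, 1, 0}; // reverse sorted
    int compares = 0, swaps = 0;
    bubbleSort(worst, 10, compares, swaps);
    std::printf("compares=%d swaps=%d\n", compares, swaps); // 45 and 45
    assert(compares == 45 && swaps == 45);
}
```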

That's why big-O is so useless... it's a nice mathematical toy, but it (usually, most often) tells you next to nothing really useful.

Given N=10, Insertion Sort does 9 iterations (and 0 to 45 swaps) and it has the exact same complexity as Bubble Sort, doing only 1/5 the number of iterations.

However, due to moving around fewer elements, when dealing with reversely sorted or random elements, Bubble Sort indeed only performs 7-10% worse for N=10 than Insertion Sort on my machine if you count actual processor cycles. For an algorithm that is intuitively soooooooooo substantially worse, that's surprisingly good.

And believe it or not -- for laughs, I ran a quick benchmark for N = 10, 1000, and 10000. Turns out that Bubble Sort even outperforms Insertion Sort on random data for N=1000 by a factor of almost 2x. No, you didn't read that wrong. I assume that's because of a combination of a more cache-friendly access pattern and moving around fewer elements.
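
For anyone who wants to reproduce that (the harness, names, and sizes are illustrative; results will vary by machine and compiler):

```cpp
#include <chrono>
#include <cstdio>
#include <random>
#include <utility>
#include <vector>

// Illustrative timing harness comparing two sorts on random data.
// Times a single run; a real benchmark would repeat and take medians.
template <typename Sort>
double timeSort(std::vector<int> data, Sort sort) {  // copy per run
    auto t0 = std::chrono::steady_clock::now();
    sort(data);
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(t1 - t0).count();
}

int main() {
    std::mt19937 rng(42);
    for (int n : {10, 1000, 10000}) {
        std::vector<int> data(n);
        for (int& x : data) x = static_cast<int>(rng());

        double bubble = timeSort(data, [](std::vector<int>& a) {
            for (size_t p = 0; p + 1 < a.size(); ++p)
                for (size_t i = 0; i + 1 < a.size() - p; ++i)
                    if (a[i] > a[i + 1]) std::swap(a[i], a[i + 1]);
        });
        double insertion = timeSort(data, [](std::vector<int>& a) {
            for (size_t i = 1; i < a.size(); ++i) {
                int key = a[i]; size_t j = i;
                while (j > 0 && a[j - 1] > key) { a[j] = a[j - 1]; --j; }
                a[j] = key;
            }
        });
        std::printf("N=%5d bubble=%9.3fms insertion=%9.3fms\n",
                    n, bubble, insertion);
    }
}
```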

Now I wonder if there is a particular size where it outperforms introsort, too... almost tempted to try. Maybe this evening :)


O(0) would mean a complexity that tends toward zero for large input sizes. It may still take a ridiculously large amount of clock time, but you know that if you provide larger numbers tending out to infinity, asymptotically the complexity is continually decreasing. The function -- whatever it is -- becomes progressively less complex to solve the bigger the input, asymptotically reaching no complexity at all.

Not sure I agree with this at all.. that would be O(1). O(0) is only the empty algorithm. NOP is not O(0), an empty function that is actually a function is not O(0), ret 0 is not O(0), the only thing that is O(0) is nothing, the empty algorithm that is never started and never ends as it is nothing. Try to call it on the complete set of particles in the universe and it is never run, the algorithm is never started and never existed as it was erased on compile-time.

If one source contains an algorithm and another does not and they yield identical bytecode after compilation, then that algorithm is O(0).

Leaf-methods on trees where only branches have code can reasonably be described as O(0), if they are expected to be optimized out, for example. Can't think of an example where it would make a difference for the final O(..) .. but I somehow suspect there might be some example where a sub-algorithm that is expected to be removed would give a theoretical answer closer to reality if it was designated O(0) rather than O(1) in the analysis.

I'm going to disagree slightly with both of these. To me the most coherent definition of O(0), at least based on the definition of O I've always used (which is the same as Wikipedia's) is to say that f(x) is O(0) if and only if f(x) is the empty algorithm for values of x larger than some (finite) x0.

Similar to what frob is saying, but I don't think saying that a function tends toward having no complexity is quite strong enough, as it permits that the function f(x) may still have non-zero complexity for all values of x.

Why a special definition of O(0)? It works well with the ordinary one:

$f(x) \in O(g(x))$ if and only if there exist $M > 0$ and $x_0$ such that $|f(x)| \le M\,|g(x)|$ for all $x \ge x_0$.

Set g(x) identically to zero; the inequality then forces f(x) = 0 for all x >= x0. You could then either prove or define the empty algorithm to be precisely the algorithm that takes zero time. The empty algorithm doesn't depend on anything, and in particular it doesn't depend on x, so it would take no time regardless of input. No matter what angle you approach this from, the result should be the same.
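
Spelled out, this is just the definition above specialized to $g(x) \equiv 0$:

```latex
f(x) \in O(0)
  \iff \exists\, M > 0,\ x_0 \ \text{such that}\ |f(x)| \le M \cdot 0 = 0
       \ \text{for all}\ x \ge x_0
  \iff f(x) = 0 \ \text{for all}\ x \ge x_0 .
```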

openwar - the real-time tactical war-game platform

