
Is O(0) equal to O(1)?

Started by July 10, 2015 08:29 AM
35 comments, last by cowsarenotevil 9 years, 2 months ago

Without a rigorous computational model (what it means for an algorithm or a piece of code to "compute" or "take time") the whole discussion is moot and saying that something "has time complexity O(0)" is handwavy and/or meaningless, whereas we can rigorously say that a function (in the mathematical sense) is or is not in O(0).

Is it? If you are discussing time complexity in terms of O(...) how can O(0) be any more moot than O(1)? The math is as you say rigorously well defined, even if the computational model not always is. Elided code (e.g. inlined empty function body) can be meaningful from a higher perspective but should reasonably have time complexity O(0).
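
For reference, a minimal sketch of the textbook definition (my addition, not quoted from any post):

    f \in O(g) \iff \exists C > 0,\ \exists n_0,\ \forall n > n_0 :\ |f(n)| \le C\,|g(n)|

With g(n) = 0 the bound forces f(n) = 0 for all n > n_0, so O(0) contains exactly the functions that are eventually zero. In particular O(0) is a proper subset of O(1): the zero function is in both, but a function that is constantly 1 is only in O(1).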

openwar - the real-time tactical war-game platform

Elided code (e.g. inlined empty function body) can be meaningful from a higher perspective but should reasonably have time complexity O(0).

Call it 10^10 times one after another, then, and see whether it takes no time, or whether it ever stays under one atomic time unit no matter how many calls you make. On a deterministic machine no operation can come free of time; that would be metaphysical.



programmers love to abuse big-O notation to mean whatever they feel it should mean on any particular day, so I guess it doesn't really matter
Found the math major!

"I AM ZE EMPRAH OPENGL 3.3 THE CORE, I DEMAND FROM THEE ZE SHADERZ AND MATRIXEZ"

My journals: dustArtemis ECS framework and Making a Terrain Generator

O(1) contains many functions, such as 68/x


Some older 8 bit platforms didn't have a native divide operation, division was done by repeated subtraction. In this case division becomes O(n). I'll just leave this spanner here, in these works... :lol:
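
As a concrete sketch (my illustration in C, not code from the post), repeated subtraction makes the loop count proportional to the quotient, i.e. linear in the value being divided:

    #include <stdint.h>

    /* Division by repeated subtraction, as on CPUs with no divide
       instruction: the loop body runs dividend/divisor times, so the
       work grows linearly with the value of the dividend: O(n). */
    uint16_t div_repeated_sub(uint16_t dividend, uint16_t divisor)
    {
        uint16_t quotient = 0;
        while (dividend >= divisor) {   /* assumes divisor != 0 */
            dividend -= divisor;
            ++quotient;
        }
        return quotient;                /* dividend now holds the remainder */
    }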

If you were clever you would have used shifts and subtractions to get the division done in O(log n). ;)
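
Roughly what that shift-and-subtract version looks like (again my sketch, assuming 16-bit words): the loop runs once per bit of the dividend, so the work is logarithmic in the value n rather than linear.

    #include <stdint.h>

    /* Restoring division via shifts and subtractions: one iteration per
       bit of the word, i.e. O(log n) in the dividend's value n. */
    uint16_t div_shift_sub(uint16_t dividend, uint16_t divisor)
    {
        uint16_t quotient = 0;
        uint16_t remainder = 0;
        for (int bit = 15; bit >= 0; --bit) {
            /* bring down the next bit of the dividend */
            remainder = (uint16_t)((remainder << 1) | ((dividend >> bit) & 1u));
            if (remainder >= divisor) { /* subtract whenever the divisor fits */
                remainder -= divisor;
                quotient |= (uint16_t)(1u << bit);
            }
        }
        return quotient;                /* remainder holds dividend % divisor */
    }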

Some older 8 bit platforms didn't have a native divide operation, division was done by repeated subtraction. In this case division becomes O(n). I'll just leave this spanner here, in these works... :lol:

Yes, division is a non-linear operation, one that can be defined only on certain numbers


programmers love to abuse big-O notation to mean whatever they feel it should mean on any particular day, so I guess it doesn't really matter

Found the math major!

Probably the funniest of the rebuttals to Bacterius.

But please, remember that computer science is applied mathematics. Big-O notation is a defined thing, and mathematically it does have meaning. The CS department definition is slightly different from the mathematics department definition. Bacterius is right about that.

In math, there are formal papers defining the notation that were in place by 1892 and still mean those same things: descriptions of the upper and lower bounds of certain problems. In CS the meaning is more general; it is the simplified asymptotic behavior of a function, usually expressed as a complexity family.

In the CS world, O(0) means basically what Bacterius said: the notation is a simplified asymptotic bound on the complexity of the algorithm, representing its order of growth. Big O represents the asymptotic upper bound; big Omega represents the lower bound (and little o a strict, non-tight upper bound).

O(1) means it is a constant complexity for any input size, tending neither to increase nor to decrease. A task like negating a number (e.g. 1 returns -1, 50 returns -50, -100 returns 100) has constant complexity; it is equally complex no matter the input, so it is O(1).
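
A trivial sketch of such a constant-time task in C (my illustration, not from the post):

    /* One arithmetic operation regardless of the input's value: O(1). */
    int negate(int x)
    {
        return -x;
    }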

Other tasks have asymptotic behavior toward logarithmic, polynomial, factorial, or other simplified representations.

O(0) would mean a complexity that tends toward zero for large input sizes. It may still take a ridiculously large amount of clock time, but you know that if you provide larger numbers tending out to infinity, asymptotically the complexity is continually decreasing. The function -- whatever it is -- becomes progressively less complex to solve the bigger the input, asymptotically reaching no complexity at all.
Some older 8 bit platforms didn't have a native divide operation, division was done by repeated subtraction. In this case division becomes O(n). I'll just leave this spanner here, in these works... laugh.png

Is that really the case, though?

If something is O(N) but you know that N does not approach infinity, but at most, say, 20... is that really O(N)? To me it's O(1).

An 8-bit CPU surely won't be able to handle anything much bigger than 16-bit integers by combining two registers, but let's assume it can do 32 bits -- that is still a very finite number of steps. Yes, the number of steps varies across inputs, but the upper bound is constant.
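
A sketch of that reasoning in the notation itself (my phrasing, assuming inputs capped at 32 bits):

    T(n) \le \max_{m \le 2^{32}} T(m) =: C \quad \text{for every admissible } n

so over the restricted domain T \in O(1), even though the unrestricted algorithm is O(n).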


O(0) would mean a complexity that tends toward zero for large input sizes. It may still take a ridiculously large amount of clock time, but you know that if you provide larger numbers tending out to infinity, asymptotically the complexity is continually decreasing. The function -- whatever it is -- becomes progressively less complex to solve the bigger the input, asymptotically reaching no complexity at all.
Funnily, that would mean that many cryptographic attacks have O(0) complexity, as they tend to get easier the more plain/ciphertext pairs or known plaintexts you have. :)

If something is O(N) but you know that N does not approach infinity, but at most, say, 20... is that really O(N)? To me it's O(1).

Yes, up to a point.

I think it would be wrong to say that a bubble sort algorithm has a constant complexity just because it is only called with arrays of up to 10 elements in size.

However, I think it is fair that when trying to determine the overall complexity of a larger unit composed of lots of smaller algorithms, you could then treat the "up to 10 element bubble sort", as a kind of O(1) black box.
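
A sketch of that black-box view in C (my illustration; the cap and the names are hypothetical):

    #include <stddef.h>

    #define MAX_N 10   /* hypothetical cap: callers never pass more */

    /* Bubble sort over at most MAX_N elements: at most MAX_N * MAX_N
       comparisons, a fixed constant, so from the caller's point of
       view the whole call can be costed as an O(1) black box. */
    void bubble_sort_small(int a[], size_t n)   /* requires n <= MAX_N */
    {
        for (size_t i = 0; i + 1 < n; ++i)
            for (size_t j = 0; j + 1 < n - i; ++j)
                if (a[j] > a[j + 1]) {
                    int tmp = a[j];
                    a[j] = a[j + 1];
                    a[j + 1] = tmp;
                }
    }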

This topic is closed to new replies.
