programmers love to abuse big-O notation to mean whatever they feel it should mean on any particular day, so I guess it doesn't really matter
Found the math major!
Probably the funniest of the rebuttals to Bacterius.
But please, remember that computer science is applied mathematics. Big-O notation is a defined thing, and mathematically it does have meaning. The CS department definition is slightly different from the mathematics department definition. Bacterius is right about that.
In math, there are formal papers defining the notation going back to the 1890s, and it still means the same thing today: bounds on the growth of functions. In CS the meaning is more general: it is the simplified asymptotic behavior of a function, usually expressed as a complexity family.
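For reference, a standard textbook statement of the math-department version of big O (the constants c and n_0 here are the usual quantifiers from that definition, nothing specific to this thread):

```latex
% f is big-O of g: beyond some point, f grows no faster than g up to a constant factor
f(n) = O(g(n)) \iff \exists\, c > 0,\ \exists\, n_0,\ \forall n \ge n_0 :\ |f(n)| \le c\,|g(n)|
```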
In the CS world, O(0) means basically what Bacterius said: the notation is a simplified asymptotic bound on the complexity of the algorithm, representing the order of growth. Big O represents the upper bound asymptote, big Omega the lower bound, and little o a strict (non-tight) upper bound.
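To keep the symbols straight, the related notations line up roughly like this:

```latex
f(n) = O(g(n))       % upper bound: f grows no faster than g (up to a constant)
f(n) = \Omega(g(n))  % lower bound: f grows at least as fast as g
f(n) = \Theta(g(n))  % tight bound: both of the above hold
f(n) = o(g(n))       % strict upper bound: f(n)/g(n) \to 0 as n \to \infty
```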
O(1) means constant complexity for any input size, tending to neither increase nor decrease. A task like negating a number (e.g. 1 returns -1, 50 returns -50, -100 returns 100) has constant complexity; it is equally complex no matter the input, so it is O(1).
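A throwaway sketch of that negation example in Python (the function name is made up purely for illustration); the amount of work is the same whether the input is 1 or 1,000,000:

```python
def negate(x: int) -> int:
    # One arithmetic operation regardless of the value of x,
    # so the running time does not grow with the input: O(1).
    return -x

assert negate(1) == -1
assert negate(50) == -50
assert negate(-100) == 100
```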
Other tasks have asymptotic behavior that falls into the logarithmic, polynomial, factorial, or other simplified families.
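For a concrete member of one of those families, binary search is the usual example of logarithmic growth. Each step halves the remaining range, so the work grows as O(log n); this is just an illustrative sketch assuming a sorted list:

```python
def binary_search(items: list[int], target: int) -> int:
    # Each iteration halves the search range, so the number of
    # iterations grows as O(log n) in the length of the list.
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1  # not found
```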
O(0) would mean a complexity that tends toward zero as the input size grows. It may still take a ridiculously large amount of clock time, but you know that as you provide larger inputs tending out to infinity, the complexity is continually decreasing. The function, whatever it is, becomes progressively less complex to solve the bigger the input, asymptotically reaching no complexity at all.
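A deliberately contrived sketch of what that shape looks like (the function name and the 1,000,000 constant are invented purely for illustration): the loop count shrinks as n grows, heading asymptotically toward zero work, yet for small n it still churns through a huge number of iterations, matching the "ridiculously large amount of clock time" caveat above.

```python
def shrinking_work(n: int) -> int:
    # Assumes n >= 1. The number of iterations is about 1_000_000 / n,
    # which tends toward zero as n grows without bound: the bigger
    # the input, the less work there is left to do.
    total = 0
    for _ in range(1_000_000 // n):
        total += 1
    return total
```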