Frankly, a loss of precision on these scales is a non-issue. I knew that much before I started this thread :). That said, on the one hand that's not really the point of the discussion, and on the other hand I hadn't considered directly casting stuff to doubles, which on closer inspection introduces far less error than I initially figured and considerably alleviates the problem without much further thought.
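For the record, here's roughly what that direct-to-double approach looks like - a minimal sketch assuming the local coordinates are signed 64-bit integers in kilometres (the type and function names are made up):

```cpp
#include <cstdint>
#include <cmath>
#include <cstdio>

// Made-up type: a local coordinate as a signed 64-bit integer 3-vector in km.
struct Vec3i64 { std::int64_t x, y, z; };

// Cast the per-axis deltas to double and do the rest in floating point.
// The deltas stay exact as long as they are below 2^53 km, and the
// subsequent rounding is down in the ~1e-16 relative error range.
// Assumes both points sit in the same local cell, so the integer
// subtraction itself cannot overflow.
double distanceKm(const Vec3i64& a, const Vec3i64& b)
{
    const double dx = static_cast<double>(b.x - a.x);
    const double dy = static_cast<double>(b.y - a.y);
    const double dz = static_cast<double>(b.z - a.z);
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

int main()
{
    const Vec3i64 origin{0, 0, 0};
    const Vec3i64 corner{1753413056, 1753413056, 1753413056};
    std::printf("distance: %.3f km\n", distanceKm(origin, corner));
}
```

Nothing fancy, but even at these magnitudes the rounding is a footnote rather than a problem.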
However... I did do some further calculations to see what the limitations would be if I wanted to encompass the entire observable universe and calculate distances at a precision that would be absolutely minuscule with respect to the numbers involved. And I think I've figured out a somewhat reasonable, mathematically sound scheme to do so, which is in no way, shape or form overly fascinating. Or useful. In fact, AFAICT distances can be calculated to roughly within one micrometer across the entire observable universe without changing the above data structure, while only accepting some - though arguably considerable - waste of storage space. All of which is something every science nerd needs more of. Right?
tl;dr: please stop reading now. I'm way overthinking this.
In case the tl;dr didn't discourage you, here are some facts and presumptions:
1) the size (diameter) of the observable universe is 93 billion light years.
1.1) I was initially thinking of using 1 AU (astronomical unit) as the cutoff distance, but I wanted to be able to store a normal-sized solar system within single-precision range, which an AU cannot do
2) speed and storage are not really issues
3) however, I don't want to use arbitrary-precision math, i.e. everything should stay within the pipeline a regular 64-bit CPU can handle
4) the biggest presumption here is that my math is actually correct. Which it may very well not be.
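And for reference (presumption 4 practically begs for it), here are the conversion constants involved, so anyone who wants to plug the numbers in themselves can:

```cpp
#include <cstdio>

// Standard conversion constants, all in kilometres.
constexpr double KM_PER_AU = 1.495978707e8;    // astronomical unit
constexpr double KM_PER_LY = 9.4607304726e12;  // light year
constexpr double KM_PER_PC = 3.0856775815e13;  // parsec (~3.26 ly)
constexpr double UNIVERSE_DIAMETER_LY = 93e9;  // presumption 1

int main()
{
    std::printf("1 pc = %.4e km = %.3f ly\n", KM_PER_PC, KM_PER_PC / KM_PER_LY);
    std::printf("observable universe = %.4e km across\n",
                UNIVERSE_DIAMETER_LY * KM_PER_LY);
    // One way to read presumption 1.1: with a base unit of 1 km, a 32-bit
    // float only stays km-exact up to 2^24 km, which is a fraction of an AU.
    std::printf("2^24 km = %.4f AU\n", 16777216.0 / KM_PER_AU);
}
```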
And here's the rundown:
a) assume overflows are unacceptable and that it's undesirable to make use of the upper ranges of either double-precision floating point or some spatially non-symmetrical fixed-point distance metric. The maximum squared distance between two 64-bit coordinates located within a cube hence becomes:
x^2 + x^2 + x^2 = 3*x^2 = 2^63 = 9223372036854775808 km^2
Which resolves to:
x^2 = 3074457345618258602.6(6) km^2 =>
x = 1753413056.1902003251168019428139 km, or =>
x = 5.6824247180601 pc
Which is ~200 times less than the kiloparsec range I was initially trying to stretch the global coordinates to. No worries. Let's just reduce the global scale to the range [1 km; 1 pc].
b) with this there's enough precision to calculate distances directly without fear of overflow. The point where double-precision floating point dips below whole-unit precision is around 1e+20, which is notably ~10x larger than the 9223372036854775808 km^2 upper limit for the squared distance. Which is actually a pretty nice fit.
c) the bigger problem here is wasted storage space. Using 64 bits to store distances from 1 km to 1 kiloparsec nets a whopping log2(974932) - log2(1000) = 9.9291577864 bits of wasted space per axis. Reducing the upper limit to one parsec bumps this up to 19.8949420711 bits.
Which is a total of ~60 bits of unused space when storing a single coordinate. However, that's not all.
d) the same logic can be applied to the intergalactic scope, which is also a 64-bit integer 3-vector, boosting the amount of wasted space to around 15 bytes per coordinate. Which is a LOT.
e) that being said, using 44 bits of precision per axis on the intergalactic scale on top of the 1 parsec local scale amounts to a maximum universe size of (2^44 * 3.26) ly / 93000000000 ly = ~616x the size of the observable universe.
Success! Right? Well, yeah, as long as you ignore the fact that each coordinate wastes more space than a full-blown 3-vector used for rendering occupies in the first place. (A quick sanity check of a couple of these numbers follows below.)
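Since I'm going to ask people to check my math anyway, here's a tiny program that re-derives the two figures that are easiest to check mechanically - the overflow-safe axis length from a) and the ~616x figure from e) - using the same rounded 3.26 ly/pc value as above:

```cpp
#include <cmath>
#include <cstdio>

constexpr double LY_PER_PC = 3.26;             // rounded, as in e)
constexpr double UNIVERSE_DIAMETER_LY = 93e9;  // presumption 1

int main()
{
    // a) the per-axis range x for which 3*x^2 still fits into a signed
    //    64-bit integer, i.e. stays at or below 2^63.
    const double x = std::sqrt(std::pow(2.0, 63) / 3.0);
    std::printf("max per-axis range: %.4f km\n", x);  // ~1753413056.1902 km

    // e) 2^44 intergalactic cells per axis, each cell 1 pc across.
    const double spanLy = std::pow(2.0, 44) * LY_PER_PC;
    std::printf("2^44 pc = %.3e ly = %.1fx the observable universe\n",
                spanLy, spanLy / UNIVERSE_DIAMETER_LY);
}
```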
There are a couple of upshots to all this wasted space, however:
a) the extra bits can optionally be used to store rotational or other intrinsic data, such as brightness and/or color (one possible layout is sketched after this list).
b) assuming most of the universe is procedurally generated and can hence be quickly discarded on demand, the number of objects that need to be stored with extended precision at any one time is actually relatively small. Likely in the upper hundreds or lower thousands. Which doesn't really amount to too much wasted space in the end.
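To make upshot a) a bit more concrete, here's one purely hypothetical way the spare bits of a single intergalactic axis could be carved up - 44 bits of cell index as per e), plus an 8-bit brightness and a 12-bit colour index. Field sizes and names are made up on the spot:

```cpp
#include <cstdint>

// Hypothetical packing of one intergalactic axis into 64 bits:
// bits  0..43  cell index (44 bits, as per e)
// bits 44..51  brightness (8 bits)
// bits 52..63  colour index (12 bits)
struct PackedAxis {
    static constexpr int kCellBits = 44;
    static constexpr std::uint64_t kCellMask = (std::uint64_t{1} << kCellBits) - 1;

    std::uint64_t raw = 0;

    static PackedAxis make(std::uint64_t cell, std::uint8_t brightness, std::uint16_t colour12)
    {
        PackedAxis p;
        p.raw = (cell & kCellMask)
              | (static_cast<std::uint64_t>(brightness) << kCellBits)
              | (static_cast<std::uint64_t>(colour12 & 0xFFF) << (kCellBits + 8));
        return p;
    }

    std::uint64_t cell() const       { return raw & kCellMask; }
    std::uint8_t  brightness() const { return static_cast<std::uint8_t>((raw >> kCellBits) & 0xFF); }
    std::uint16_t colour() const     { return static_cast<std::uint16_t>((raw >> (kCellBits + 8)) & 0xFFF); }
};

int main()
{
    // Quick round-trip self-check.
    const PackedAxis a = PackedAxis::make(123456789, 200, 0x5A5);
    return (a.cell() == 123456789 && a.brightness() == 200 && a.colour() == 0x5A5) ? 0 : 1;
}
```

Three of those make up a full intergalactic coordinate, with the payload fields swapped for whatever a given object actually needs.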
So - voila. Here's to 2 hours well spent! Because SCIENCE!
Incidentally, if anyone's bored, please feel free to check my math.