
Issues with floating origin

Started by March 13, 2022 03:10 AM
58 comments, last by dragonmagi 2 years, 7 months ago

dragonmagi said:

@Gnollrunner “They certainly can solve it even with a 32 bit float GPU”

Well that is not double precision, is it? Can you describe an algorithm or design for what you mean?

I gave a pretty detailed description in my first post on this thread. You go to single precision in the final step.

@Gnollrunner I find that description a bit hard to follow but it does sound similar to continuous floating origin. What is the largest continuous World extent that you use this method for?

I should say that I am not arguing against what you are saying.

I don't have anything against doubles; in fact, all my work is intended to include doubles for what I call reference positions and reference maps (e.g. in the glossary I referenced), and to perform the step down to single precision (called a floating origin subtraction in my thesis) in a controlled manner as the relative distance between the viewpoint and an object reduces. The calculation can be repeated as the distance reduces, thus progressively refining the fidelity.
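To make that concrete, here is a minimal sketch of a floating origin subtraction in C++. The types and names are illustrative only, not from my actual code:

```cpp
#include <cstdio>

// Illustrative types: double-precision reference positions for the world,
// single-precision positions for everything near the viewpoint.
struct Vec3d { double x, y, z; };
struct Vec3f { float  x, y, z; };

// Floating origin subtraction: cancel the large magnitudes in double
// precision first, THEN narrow the small relative vector to float.
Vec3f floatingOriginSubtract(const Vec3d& object, const Vec3d& viewpoint)
{
    return Vec3f{ static_cast<float>(object.x - viewpoint.x),
                  static_cast<float>(object.y - viewpoint.y),
                  static_cast<float>(object.z - viewpoint.z) };
}

int main()
{
    Vec3d object    { 9.0e9, 0.001, 0.0 };  // far from the world origin, in metres
    Vec3d viewpoint { 9.0e9, 0.0,   0.0 };  // observer right next to it

    Vec3f rel = floatingOriginSubtract(object, viewpoint);
    std::printf("relative: %g %g %g\n", rel.x, rel.y, rel.z);  // 0 0.001 0
}
```

Repeating the subtraction as the viewpoint closes in keeps the cancellation exact where it matters, which is the progressive refinement mentioned above.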

The only reason I focus on floats right now, excluding doubles from all the code, is to extract as much out of the float system as possible and to ensure that I have all of the design and algorithms that solve pretty much every issue, before adding doubles back in. Otherwise, a move to include doubles can hide an issue that would have been discovered by pushing floats to the max in a variety of situations.

By pushing floats to the max in as many ways as I can think of, I can also gain an understanding of different aspects of the resolution, interaction and jitter issues. Then I can isolate them, characterise them and design specific solutions.


@dragonmagi Sure, do what works for you. I find that doubles make things easier in my code, but there is more than one way to skin a cat. I can support an Earth-size world and much larger with this. For non-continuous worlds, I can do a small galaxy.

dragonmagi said:

To give you a little more insight:

Whether the objects nearby are jittering or your viewpoint is jittering, interactive motion towards something can exhibit a loss of degrees of motion freedom in one or more axis directions. In the wandering tower example, I had travelled the greater distance in the Z direction, and consequently lost motion freedom in the Z direction. The proof of this is simply your motion speed: if you can't move at a very slow speed, you have lost too much resolution in the direction of movement. Increasing your speed will allow movement, but you will jump past/through something you want to approach accurately.
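A tiny demonstration of that loss, assuming metres as units: a 1 cm step at 500 km from the origin simply rounds away.

```cpp
#include <cmath>
#include <cstdio>

int main()
{
    float z        = 500000.0f;  // 500 km travelled in Z
    float slowStep = 0.01f;      // try to move 1 cm per frame

    float after = z + slowStep;  // rounds back to 500000: no movement at all
    std::printf("moved: %s\n", (after != z) ? "yes" : "no");  // prints "no"

    // Spacing between adjacent floats at this magnitude: ~0.03125 m, so any
    // step below ~0.016 m is lost, and larger steps land in ~3 cm jumps.
    std::printf("resolution: %g m\n", std::nextafterf(z, 1.0e9f) - z);
}
```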

Interesting. This explains something I noticed earlier. That is, when I assigned the exact coordinates directly to the floating origin offset, it took me to the correct spot because I hadn't done any movement yet. After I moved, even a bit, it would get messed up.

So does this dynamic resolution spaces thing solve the distant jitter issue? Assuming you have a paper on the subject, I'll see if I can implement something along those lines.

@dragonmagi Thanks for your contribution.

After reading the original Floating Origin paper, which I did not know about before, I'd like to sum up to see if I understood it correctly.
My motivation is the hope that I could eventually avoid the need to go to double precision.

Your proposal is to do an inverse view transform, followed by a world transform, then transform all the objects, which at this point are close to the origin if they are close to the viewer.

Confusion on my side: Assuming we move the view slowly like in your videos, the inverse transform may contain large numbers. How can we still represent subtle camera movements, which means adding tiny numbers to large ones?

But the more important and similar question: Even if we solve rendering jitter, representing and modifying the world still has precision issues at large coordinates. So if the Sun is the origin, I still cannot make a pencil on Saturn move one millimeter.

Now, assuming I had a hierarchy of parent nodes between Saturn and the Sun, so the pencil's local offset to its parent is small, would it work then? Would the hierarchy allow me to ‘sum up precision’ in a way that gives practically infinite precision when ‘zooming in’ to a region of interest? Of course I would need to move my view close to the pencil so I can see the precision is there.
But even without any rendering, I could change the local coordinates of the pencil. And after that, even if doing all the transformations only in single precision, I could see the subtle changes made to the pencil?

\:O/ I think this should actually work!?!

I'm kinda baffled, so I guess my questions sound pretty confusing. But maybe you're experienced with this kind of confusion from people, and can confirm regardless… :D

@JoeJ It does work. I just set it up, and I ended up exactly where I was supposed to be, all using single floating point values. Furthermore, this seems a lot like the Dynamic Resolution Spaces that @dragonmagi mentioned.


@Tape_Worm DRS solved the distant relative jitter issue, yes.

I have to admit, this has been an exercise in pure frustration. So, after I implemented the system that @joej described, I found that I could no longer accurately measure the distance travelled. At least, it was coming back incorrectly (this could still be something utterly idiotic that I've done). It shouldn't be due to floating point error, as I'm using a decimal value to accumulate the distance travelled. Again, I'm sure this is my fault somehow.

Would someone just shoot me in the head now please?

@dragonmagi Is the DRS system anything resembling what @joej described? Based on what I've read in your papers it seems that way, but I don't know for certain (also, probably just me being dumb).

JoeJ said:

@dragonmagi Thanks for your contribution.

No problem, thanks for reading my paper :)

After reading the original Floating Origin paper, which I did not know about before, I'd like to sum up to see if I understood it correctly.
My motivation is the hope that I could eventually avoid the need to go to double precision.

The definitive proof that floats alone can be used is demonstrated by the “A Sunday drive to Pluto-Charon” video. I did it to stress test, and break, the float-only approach. Well, I was able to break it, but also to make it work with additional code, speed and targeting controls, etc. I put it online (video and asset) to show that a full-scale, float-only solution with a continuous Solar System is possible. I believe that is a world first.

After doing that, I started putting up small components (Continuous Floating Origin, DRS, etc.) that took essential algorithms from the larger asset, improved on them, and added a lot of additional capability. These will be further developed, then used to rebuild the original Relative Space Framework.

Your proposal is to do an inverse view transform, followed by a world transform, then transform all the objects, which at this point are close to the origin if they are close to the viewer.

My approach is to take the user navigation inputs and reverse transform the world instead of moving the view from the origin. I have a single transform over the world and apply the reverse transform to that. I never individually move all the objects, lights, physics objects and actors the way the fake floating origin code does; that's really inefficient.
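In sketch form, as it could be coded (illustrative names, not my actual implementation):

```cpp
// Sketch: the viewpoint is pinned at the origin; user navigation input is
// applied, inverted, to a single transform at the root of the world. Every
// object, light and actor parented under that root comes along for free;
// there is no per-object loop like the fake floating origin approach needs.
struct Vec3f { float x, y, z; };

struct WorldRoot
{
    Vec3f translation { 0.0f, 0.0f, 0.0f };  // accumulated inverse camera motion

    void applyNavigation(const Vec3f& cameraDelta)
    {
        translation.x -= cameraDelta.x;  // move the world the opposite way
        translation.y -= cameraDelta.y;  // instead of moving the camera
        translation.z -= cameraDelta.z;
    }
};
```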

Confusion on my side: Assuming we move the view slowly like in your videos, the inverse transform may contain large numbers. How can we still represent subtle camera movements, which means adding tiny numbers to large ones?

Correct. That is not a problem for the zero-centred avatar and any objects that are stationary within it (UI, etc.).

Your question is an important one because I get it often. Most of the time, it is some “expert” who states something similar, as if to say “there, that proves your code won't work and you need doubles, etc.”. It is a misconception.

All objects outward from the avatar that are under the World transform are OK for travel up to about 70,000 m.

After that you start to get distant relative jitter: objects connected to the hierarchy under the World transform will jitter, more noticeably the closer you move relative to them. Dynamic resolution spaces deals with that.
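That figure lines up with plain float spacing, assuming metres as the unit; a quick check (illustrative code, not from any framework):

```cpp
#include <cmath>
#include <cstdio>
#include <initializer_list>

int main()
{
    // One ULP (gap between adjacent floats) at various distances from the
    // origin; positional jitter is on the order of this spacing.
    for (float d : { 1000.0f, 10000.0f, 70000.0f, 500000.0f })
        std::printf("at %8.0f m: spacing = %g m\n",
                    d, std::nextafterf(d, 1.0e9f) - d);
    // At 70000 m the spacing is ~0.0078 m, approaching a visible fraction
    // of a centimetre, which matches the ~70,000 m travel figure above.
}
```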

But the more important and similar question: Even if we solve rendering jitter, representing and modifying the world still has precision issues at large coordinates. So if the Sun is the origin, I still cannot make a pencil on Saturn move one millimeter.

True, if it is in the same space and math model as the avatar; but why would it be?

My answer has a few parts.

  1. In general, if there is no player nearby, no observer, then you don't bother doing anything. If Saturn is visible, you can even use an imposter sprite for it.
  2. Distant objects jitter but, because of perspective foreshortening, the jitter is divided by the distance and is not visible to an origin-centred observer.
  3. As the centred observer approaches the distant objects, they jitter less and less, so there is no problem.
  4. If you, for some reason, need to do something that is in the Saturn reference map, then you do it in a mathematical model with its own origin, placed where you need it (see the sketch below).

Mathematical models.

To do what I described for Saturn, we cannot use any physics or other support that is tied to a rendering and behaviour pipeline. This is why I support Marcos Elias's call for opening up the Unity physics API. Physics and other behaviours would need to be done with separate math models of objects (as meshes, positions and properties), i.e. what current physics does before rendering. We should get together and build an open source system that everyone can use.
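Point 4 and the paragraph above, sketched with illustrative types (not from any real framework):

```cpp
#include <cstdio>

struct Vec3d { double x, y, z; };
struct Vec3f { float  x, y, z; };

// A reference map: a local space with its own origin, whose position is
// stored in double precision under its parent map (here, the Sun's).
struct ReferenceMap
{
    Vec3d originFromParent;
};

int main()
{
    ReferenceMap saturn { { 1.4e12, 0.0, 0.0 } };  // ~1.4e12 m from the Sun

    // The pencil lives in Saturn's map with small, float-friendly coordinates.
    Vec3f pencil { 2.5f, 0.0f, 1.0f };
    pencil.x += 0.001f;  // one millimetre: easily representable locally

    std::printf("pencil (Saturn-local): %.6f %.6f %.6f\n",
                pencil.x, pencil.y, pencil.z);  // 2.501000 0.000000 1.000000

    // Sun-relative coordinates are only formed when an observer needs them,
    // and by then the observer's own reference position cancels most of the
    // magnitude in double precision first.
    (void)saturn;
}
```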

Now, assuming I had a hierarchy of parent nodes between Saturn and the Sun, so the pencil's local offset to its parent is small, would it work then? Would the hierarchy allow me to ‘sum up precision’ in a way that gives practically infinite precision when ‘zooming in’ to a region of interest? Of course I would need to move my view close to the pencil so I can see the precision is there.
But even without any rendering, I could change the local coordinates of the pencil. And after that, even if doing all the transformations only in single precision, I could see the subtle changes made to the pencil?

Ah, yes, you are already thinking what I just wrote :)

\:O/ I think this should actually work!?!

I'm kinda baffled, so I guess my questions sound pretty confusing. But maybe you're experienced with this kind of confusion from people, and can confirm regardless… :D

Nope, these are spot on, and exactly what I had, and began solving, a long while back.

There is still scope for more pieces to the solution, things I have not documented yet.

@Tape_Worm @dragonmagi Is the DRS system anything resembling what @joej described? Based on what I've read in your papers it seems that way, but I don't know for certain (also, probably just me being dumb).

No, there are differences.

This topic is closed to new replies.
