JoeJ said:
@dragonmagi Thanks for your contribution.
No problem, thanks for reading my paper 🙂
After reading the original Floating Origin paper, which I did not know about before, I'd like to sum up to see if I understood it correctly.
My motivation is the hope that I could eventually avoid the need to go to double precision.
The definitive proof that floats alone can be used is the “A Sunday drive to Pluto-Charon” video. I made it to stress test, and break, the float-only approach. I was able to break it, but also to make it work again with additional code, speed and targeting controls, etc. I put it online (video and asset) to show that a full-scale, float-only, continuous Solar system is possible. I believe that is a world first.
After doing that, I started releasing small components, Continuous Floating Origin and DRS etc., that took the essential algorithms from the larger asset, improved on them and added a lot of additional capability. These will be developed further, then used to rebuild the original Relative Space Framework.
Your proposal is to do an inverse view transform, followed by a world transform, then transform all the objects, which at this point are close to the origin if they are close to the viewer.
My approach is to take the user navigation inputs and reverse-transform the world, instead of moving the view away from the origin. There is a single transform over the whole world, and I apply the reverse transform to that alone. I never move all the objects and lights, physics objects and actors the way the fake floating-origin code does; that is really inefficient.
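To make that concrete, here is a minimal C++ sketch of the single-transform idea (a generic illustration with made-up names, not code from my asset):

```cpp
#include <cstdio>

// Single-root reverse transform: the camera never leaves the origin.
struct Vec3 { float x, y, z; };

// worldRootPos is the position of the one node the entire scene hangs
// under. Moving the avatar by +move is expressed as moving the world
// by -move relative to the origin-fixed camera.
void applyNavigation(Vec3& worldRootPos, const Vec3& move) {
    worldRootPos.x -= move.x;
    worldRootPos.y -= move.y;
    worldRootPos.z -= move.z;
}

int main() {
    Vec3 worldRootPos{0, 0, 0};
    applyNavigation(worldRootPos, {0, 0, 5});  // "fly forward" 5 m
    std::printf("world moved to z = %g\n", worldRootPos.z);  // -5
}
```

Everything else inherits this one change through the hierarchy, which is why nothing has to be moved per object.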
Confusion on my side: assuming we move the view slowly, as in your videos, the inverse transform may contain large numbers. How can we still represent subtle camera movements, which means adding tiny numbers to large ones?
Correct. That is not a problem for the zero-centred avatar and any objects that are stationary within it (UI, etc.).
Your question is an important one, because I get it often. Most of the time it comes from some “expert” stating something similar, as if to say “there, that proves your code won't work and you need doubles, etc.” It is a misconception.
All objects outward from the avatar that sit under the World transform are fine for travel up to about 70,000 m.
After that you start to get distant relative jitter: objects connected to the hierarchy under the World transform will jitter, more noticeably the closer you move relative to them. Dynamic Resolution Spaces (DRS) deals with that.
But the more important and similar question: even if we solve rendering jitter, representing and modifying the world still has precision issues at large coordinates. So if the Sun is the origin, I still cannot make a pencil on Saturn move one millimeter.
True, if it is in the same space and math model as the avatar. But why would it be?
My answer has a few parts.
- In general, if there is no player nearby, no observer, then you don't bother doing anything. If Saturn is visible, you can even use an impostor sprite for it.
- Distant objects jitter, but because of perspective foreshortening the apparent jitter is divided by the distance, so it is not visible to an origin-centred observer (there is a numeric check after this list).
- As the centred observer approaches a distant object, its coordinates under the recentred World transform shrink, so it jitters less and less and there is no problem.
- If you, for some reason, need to be doing something in the Saturn reference frame, then you do it in a mathematical model with its own origin, placed where you need it.
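Putting rough numbers on the foreshortening point (the distance, field of view and resolution below are assumed round values, just for illustration):

```cpp
#include <cstdio>

// How big does positional jitter look to an origin-centred observer?
// The apparent (angular) error is roughly jitter / distance.
int main() {
    const double jitter   = 0.008;    // ~8 mm float step at ~70 km out
    const double distance = 70000.0;  // metres to the jittering object
    const double fov      = 1.0;      // ~57 degree FOV, assumed
    const int    pixels   = 1080;     // vertical resolution, assumed

    double angle = jitter / distance;       // radians (small-angle)
    double px    = angle / (fov / pixels);  // as a pixel fraction
    std::printf("angular error %.2e rad = %.4f px\n", angle, px);
}
```

Even the worst-case 8 mm step at 70 km comes out around a ten-thousandth of a pixel, which is why the origin-centred observer never sees it.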
Mathematical models.
To do what I described for Saturn, we cannot use any physics or other support that is tied to a rendering and behaviour pipeline. This is why I support Marcos Elias's call for opening up the Unity physics API. Physics and other behaviours would need to be done with separate mathematical models of objects (as meshes, positions and properties), i.e. what current physics does before rendering. We should get together and build an open source system that everyone can use.
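As a bare-bones sketch of what such a separate model could look like (hypothetical names and layout, an illustration of the idea rather than anything from my framework):

```cpp
#include <array>
#include <cstddef>
#include <cstdio>
#include <vector>

// A self-contained simulation model with its own origin, decoupled
// from the render hierarchy. Object positions stay small relative to
// frameOrigin, so single-precision math keeps fine resolution.
struct LocalModel {
    std::array<float, 3> frameOrigin{};  // placement in a coarse outer frame
    std::vector<std::array<float, 3>> positions;   // relative, small numbers
    std::vector<std::array<float, 3>> velocities;

    void step(float dt) {
        // Integrate in purely local coordinates: small + small, no loss.
        for (std::size_t i = 0; i < positions.size(); ++i)
            for (int k = 0; k < 3; ++k)
                positions[i][k] += velocities[i][k] * dt;
    }
};

int main() {
    LocalModel saturnDesk;
    saturnDesk.positions.push_back({0.25f, 0, 0});    // pencil, 0.25 m out
    saturnDesk.velocities.push_back({0.001f, 0, 0});  // drifting 1 mm/s
    saturnDesk.step(1.0f);
    std::printf("pencil x = %.6f m\n", saturnDesk.positions[0][0]);  // 0.251
}
```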
Now, assuming I had a hierarchy of parent nodes between Saturn and the Sun, so the pencil's local offset to its parent is small, would it work then? Would the hierarchy allow me to ‘sum up precision’ in a way that gives practically infinite precision when ‘zooming in’ to a region of interest? Of course, I would need to move my view close to the pencil so I can see that the precision is there.
But even without any rendering, I could change the local coordinates of the pencil. And after that, even doing all the transformations only in single precision, I could see the subtle changes made to the pencil?
Ah, yes, you are already thinking what I just wrote 🙂
\:O/ I think this should actually work!?!
I'm kinda baffled, so I guess my questions sound pretty confusing. But maybe you're experienced with this kind of confusion from people, and can confirm regardless… :D
Nope, these are right on the ball, and exactly the questions I had, and began solving, a long while back.
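A quick way to see why the hierarchy idea works, in a few lines of C++ (round numbers; 1.4e12 m is roughly the Sun to Saturn distance):

```cpp
#include <cstdio>

// Why the hierarchy preserves the millimetre: the edit only ever
// touches a small, parent-relative coordinate, never a planet-scale
// global one.
int main() {
    const float sunToSaturn = 1.4e12f;  // metres: huge coordinate
    const float pencilLocal = 0.25f;    // metres from its parent: tiny

    // Direct global math: one float step at 1.4e12 is ~131 km,
    // so the millimetre vanishes entirely.
    float globalBefore = sunToSaturn + pencilLocal;
    float globalAfter  = globalBefore + 0.001f;
    std::printf("global delta: %g m\n", globalAfter - globalBefore);  // 0

    // Hierarchical math: edit the small local coordinate instead.
    float localAfter = pencilLocal + 0.001f;
    std::printf("local  delta: %g m\n", localAfter - pencilLocal);    // ~0.001
}
```

The millimetre survives because it is only ever added to a small parent-relative number; pushed through the planet-scale global coordinate, it is lost.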
There is still scope for more pieces to the solution, things I have not documented yet.