dragonmagi said:
No, there are differences.
Care to help a brother out? It'd be nice to know what I'm missing.
Gnollrunner said:
Sure, do what works for you. I find that doubles make things easier in my code. But there is more than one way to skin a cat. I can support an Earth-size world and much larger with this. For non-continuous, I can do a small galaxy.
I should say by continuous, I mean continuous land. Resolution-wise, you should be able to do millimeter resolution all the way out to the orbit of Pluto by my calculations. The bigger problem for worlds is storing the data.
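For a rough sense of that figure, here is a minimal C# check of double-precision spacing at that range; the 7.4e12 m value for Pluto's orbital radius and the use of 64-bit doubles for world coordinates are assumptions of mine, not numbers from the thread:
// Spacing between adjacent doubles ("ulp") near Pluto's orbital distance.
// Math.BitIncrement requires .NET Core 3.0 or later.
double plutoOrbit = 7.4e12;                               // metres, roughly Pluto's aphelion
double ulp = Math.BitIncrement(plutoOrbit) - plutoOrbit;  // next representable double minus this one
Console.WriteLine(ulp);                                   // prints about 0.00098, i.e. roughly a millimetre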
JoeJ said:
And i don't want to write a templated replacement, because it would be a bit slower. Just discussed this with the dev of Newton Physics, which i'm using. Newton math lib has both float and double SIMD optimizations, so i can just use that and my search is over.
But the interesting thing is he also tried a templated approach, which gave him the same issues i saw with GLM: MSVC fails to generate good SIMD code from it, and constructors are a big issue needing a lot of special care. He ended up not using templates.
It depends what you need. My library is very old, like from the 90s. It has a few optimizations; for example, for most calculations it only uses a 4x3 matrix, since projection is rare. I can upgrade the key routines over time to use SIMD. I've been waiting until I get an AVX2 computer, which I just got the parts for (that was a bit of a scramble since I'm in Russia). In my case it probably won't make so much difference, however, since that's not where the major calculations are. But when I do simplex noise in SIMD I might get a big speed up. In any case, I'm still going to claim that writing a few matrix routines in double isn't very hard, no matter how you do it.
@Gnollrunner I should say by continuous, I mean continuous land. Resolution-wise, you should be able to do millimeter resolution all the way out to the orbit of Pluto by my calculations. The bigger problem for worlds is storing the data.
That's right, that's exactly what continuous floating origin does.
Care to help a brother out? It'd be nice to know what I'm missing.
Honestly, beyond the information I have given, the best way for me to help is for you to buy at least the CFO asset. It costs almost nothing, and has C# code implementations, documentation, and demo code. I can't explain it all in detail here.
@dragonmagi Thanks again for the explanation, i really found enlightenment here : )
Seeing that communication is difficult in both directions, here's the fruitful trail of thought in my specific case:
Problem: How can we represent a huge world while still having enough precision to model small scale detail?
Solution: Divide the world into a hierarchy of parenting spaces, and represent the detail in local coordinate frames, which always have enough precision.
While resolving the hierarchy of parent transforms, any precision issue coming from large numbers gets concentrated in a single parent transform and is then shared over the whole child branch of sub-spaces. Thus there is no relative jitter across nearby objects once we view them.
A neighboring space may have a slightly different error than the one we are currently in, but it won't be a problem in practice, since both adjacent spaces have similar numbers in their chain of parent transforms.
That's what matters to me. Visualization seems the smaller problem, so i needed to understand this first. I mean it's just obvious, and likely we do this anyway, but i did not really realize the trick, or which problems we just solved.
(I hope i'm right at all. Maybe that's what you address with resolution spaces. Will read… )
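To make the "hierarchy of parenting spaces" idea above concrete, here is a minimal C# sketch; the Space class and its member names are illustrative choices of mine, not from any engine or from the CFO asset:
using System.Numerics; // or the engine's own Vector3

// Each space stores only a small local offset relative to its parent, so objects
// inside it work with small, precise numbers. Large magnitudes only appear while
// resolving the chain of parent transforms, and whatever rounding happens there is
// the same for every object in the space, so nearby objects show no relative jitter.
class Space
{
    public Space Parent;     // null for the root space
    public Vector3 Origin;   // this space's origin, expressed in the parent's frame

    // Resolve the chain of parents to get an absolute (root-frame) position.
    public Vector3 ToRoot(Vector3 localPoint)
    {
        Vector3 p = localPoint + Origin;
        return Parent == null ? p : Parent.ToRoot(p);
    }
}
Two objects in the same Space compare positions purely in local coordinates, which is where the precision is preserved.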
There's a creep problem.
entity.Transform.LocalPosition = new Vector3(
    entity.Transform.LocalPosition.X - _offset.X,
    entity.Transform.LocalPosition.Y - _offset.Y,
    entity.Transform.LocalPosition.Z);
Each time you do that, you introduce some round-off error into the transform. Gradually, this error can accumulate in your transformation matrices.
If you always move the origin by an integer distance, that ought to put a relatively small upper bound on the creep. I think.
That is, if you have a floating point number, and you do something like this, which corresponds to moving the origin and moving it back,
for (int i = 0; i < 10000; i++) {
    x += 100;
    x -= 100;
}
there should be some round off error, but I think it will be no worse than the error for one iteration. Is that correct? Does rounding mode matter?
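A minimal C# sketch one could run to check that question empirically; the starting value and the 100-unit offset are arbitrary choices of mine:
// Compare the round-off after one add/subtract round trip with the round-off
// after many, to see whether the error keeps accumulating or settles.
float start = 123.456f;
float x = start;

x += 100f;
x -= 100f;
float oneTrip = Math.Abs(x - start);

for (int i = 0; i < 10000; i++)
{
    x += 100f;
    x -= 100f;
}
float manyTrips = Math.Abs(x - start);

Console.WriteLine($"after one trip: {oneTrip}, after 10000 more: {manyTrips}");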
JoeJ said:
@dragonmagi Thanks again for the explanation, i really found enlightenment here : )
Thank you, but do try not to float into space just yet, you need the algorithms sorted first!
Seeing that communication is difficult in both directions, here's the fruitful trail of thought in my specific case:
Problem: How can we represent a huge world while still having enough precision to model small scale detail?
Yes, and to have fine-fidelity motion, calculations, and interaction wherever they are needed.
Solution: Divide the world into a hierarchy of parenting spaces, and represent the detail in local coordinate frames, which always have enough precision.
That's fine.
While resolving the hierarchy of parent transforms, any precision issue coming from large numbers gets concentrated in a single parent transform and is then shared over the whole child branch of sub-spaces. Thus there is no relative jitter across nearby objects once we view them.
Correct, and you do get distant relative jitter (which may or may not be noticed) transmitted down from the large parent coordinates. This is entirely separate from the normal jitter that is solved via CFO, and that is where the dynamic resolution spaces come in.
A neighboring space may have a slightly different error than the one we are currently in, but it won't be a problem in practice, since both adjacent spaces have similar numbers in their chain of parent transforms.
These assumptions will not hold true for a dynamic system. You could start off making it that way, but everything will change.
That's what matters to me. Visualization seems the smaller problem, so i needed to understand this first. I mean it's just obvious, and likely we do this anyway, but i did not really realize the trick, or which problems we just solved.
(I hope i'm right at all. Maybe that's what you address with resolution spaces. Will read… )
Maybe that's what you address with resolution spaces.
That's it!
Nagle said:
There's a creep problem.
entity.Transform.LocalPosition = new Vector3(
    entity.Transform.LocalPosition.X - _offset.X,
    entity.Transform.LocalPosition.Y - _offset.Y,
    entity.Transform.LocalPosition.Z);
Each time you do that, you introduce some round-off error into the transform. Gradually, this error can accumulate in your transformation matrices.
True, it is very creepy, so don't do it!
There are additional issues here. You don't want to call new often during runtime (frame-by-frame code); memory allocation is slow. Iterating over all entities, physics, etc. on each move, as in the old Unity wiki fake floating origin (the shifty algorithm), is also slow. Just do one vector add to the top-level parent(s) instead. The graphics pipeline will move everything for you, because it combines the transforms of the entire World before rendering each frame anyway.
And yes, there is still creep from the transform operation. One can look at the use cases and find some for which this is irrelevant (and is cancelled out) and some for which it may be relevant and you need to do some corrections.
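A minimal sketch of that single vector add, reusing the same Entity/Transform/Vector3 shape as Nagle's snippet; worldRoot, playerDelta and the method name are illustrative, not the CFO asset's actual API:
// Apply the reverse of the player's motion for this frame to the one top-level
// parent. Every child entity inherits the shift when the graphics pipeline
// combines world matrices for rendering, so no other transform is touched.
void RecenterWorld(Entity worldRoot, Vector3 playerDelta)
{
    worldRoot.Transform.LocalPosition = new Vector3(
        worldRoot.Transform.LocalPosition.X - playerDelta.X,
        worldRoot.Transform.LocalPosition.Y - playerDelta.Y,
        worldRoot.Transform.LocalPosition.Z - playerDelta.Z);
}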
If you always move the origin by an integer distance, that ought to put a relatively small upper bound on the creep. I think.
That is, if you have a floating point number, and you do something like this, which corresponds to moving the origin and moving it back,
for (int i = 0; i < 10000; i++) { x += 100; x -= 100; }
there should be some round off error, but I think it will be no worse than the error for one iteration. Is that correct? Does rounding mode matter?
Um, I prefer to stay with floats throughout, but by all means test out your ideas and see.
Performance comment.
So I made a performance comment on Nagle's post, and I should expand on it a bit.
One of the really big myths (or pieces of deliberate misinformation) about the continuous floating origin idea went like this:
“Moving the entire world of objects every frame is not performant”
and that was used to justify the shifty methods that only shift things every threshold distance traveled.
However, those statements were obviously false to anyone who understood the fundamentals of computer graphics: if I move everything by adding a vector to the top transform, the cost is of no measurable significance. The graphics pipeline automatically moves everything for me.
Furthermore, the top-level transform cost is invariant to the number of entities in the World!
Contrast that with the shifty solution: the cost goes up with the number of entities, because it iterates over every one.
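In code, the contrast looks roughly like this; the identifiers and API shape are illustrative, as above:
// Shifty method: touches every entity, so the cost grows with the entity count.
foreach (var entity in world.Entities)
    entity.Transform.LocalPosition -= offset;   // O(n) per shift

// Continuous floating origin: one vector add to the top-level parent, O(1),
// no matter how many entities the World contains.
worldRoot.Transform.LocalPosition -= offset;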
Invariance is one of the most important properties of any technology, and my work aims to achieve invariance not just of performance with respect to the number of entities, but also of accuracy, fidelity, and quality with respect to World scale.