I remember that when I was programming my 3D engine around 2015, I needed huge coordinates for my game (which required double precision), but had to make them relative to the camera so they would fit into a single-precision floating point value that the GPU could handle quickly.
I also remember a friend creating a fractal shader around that same time which ran about 10× faster in single precision than in double (although the latter looked better).
Double precision was already available on the GPU, but it was terribly slow.
I've searched the internet but can't find any useful information on how this has progressed.
How do GPUs handle double precision today? For vertex attributes? For textures? Is it still so much slower than single precision? Is it still better to do the 'relative to camera' trick on the CPU than to simply pass double-precision values to the GPU?
I understand that the memory footprint will be twice as large and take twice as long to transfer, but I'd guess memory bandwidth has more than doubled since then, which would cancel out the difference unless you're creating an AAA game where every last instruction counts.