That depends a little on which hardware you are on. In general, things like root constants are pre-loaded into SGPRs, so there is no indirection in the shader, but there is a limit beyond which things get 'promoted' to other memory. Some hardware also has more flexibility in what it can access through raw pointers versus descriptors (e.g. some consoles with unified memory addressing are considerably more flexible).
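To make the "limit" concrete for D3D12 specifically: the root signature has a budget of 64 DWORDs, where each root constant costs 1 DWORD, a root descriptor costs 2 (it's a 64-bit GPU address), and a descriptor table costs 1. A quick sketch of that bookkeeping (the layout below is just a made-up example):

```python
# D3D12 root parameter costs, in DWORDs (per the D3D12 root signature rules)
COSTS = {"root_constant": 1, "root_descriptor": 2, "descriptor_table": 1}
ROOT_SIGNATURE_BUDGET = 64  # DWORDs

def root_signature_size(entries):
    """Sum the DWORD cost of a list of (kind, count) root parameters."""
    return sum(COSTS[kind] * count for kind, count in entries)

# Hypothetical layout: 16 DWORDs of inline constants, two root CBV/SRV
# pointers, and three tables pointing into the descriptor heap.
layout = [("root_constant", 16),
          ("root_descriptor", 2),
          ("descriptor_table", 3)]

size = root_signature_size(layout)
print(size)  # 16*1 + 2*2 + 3*1 = 23, comfortably under the 64-DWORD budget
```

Once you blow past that budget (or on hardware with fewer SGPRs to spare), the driver has to spill parameters to memory, which is where the extra indirection comes back.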
Also, some older NVIDIA GPUs seem to want frequently accessed constant buffer data stored earlier in the buffer, apparently because there is special hardware to prefetch it (I couldn't find the link offhand, but you can google it). On much newer hardware that seems to matter less. (A good anecdote: on one project of mine, using a large buffer descriptor was 2x slower on the GPU than using dedicated constant buffers on a Tegra X1, whereas on a GTX 1080 there was near-zero difference.) That said, I haven't seen similar benefits on AMD hardware, so YMMV.
Quote
I ask because it seems like lining up contiguous tables in the descriptor heap to reduce cache misses could be useful, but only if that indirection doesn't happen per-thread.
Just to be clear, a cache miss on the GPU isn't terrible by definition. As long as you can hide the latency behind another warp/wavefront (similar to a texture fetch), you won't notice it too much. In my experience it is generally better to keep VGPR pressure lower. That's not to say there is no benefit in aligning the descriptors better, but lower VGPR usage translates well to a larger subset of hardware, whereas descriptor layout seems to be more finicky per hardware. Spending a lot of time on that is totally cool if you're optimizing for PS4 or Xbox One, but not so cool if you are targeting the PC/mobile market.
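The VGPR/latency-hiding link is easy to see with a rough occupancy model. On GCN-class AMD hardware the register file gives each SIMD roughly 256 VGPRs per lane and at most 10 resident wavefronts, so the fewer VGPRs your shader uses, the more waves the scheduler has available to switch to while one stalls on a miss. (Illustrative numbers; the real allocation granularity and limits vary per chip.)

```python
# Rough occupancy model for a GCN-class SIMD: ~256 VGPRs per lane in the
# register file, at most 10 resident wavefronts. Numbers are illustrative.
VGPR_FILE = 256
MAX_WAVES = 10

def waves_per_simd(vgprs_per_wave, granularity=4):
    # VGPR allocations are rounded up to the hardware granularity
    alloc = -(-vgprs_per_wave // granularity) * granularity
    return min(MAX_WAVES, VGPR_FILE // alloc)

for vgprs in (24, 48, 84, 128):
    print(vgprs, waves_per_simd(vgprs))
# 24 VGPRs -> 10 waves, 48 -> 5, 84 -> 3, 128 -> 2
```

At 2 waves per SIMD there is almost nothing to swap in when a fetch misses, which is why shaving VGPRs often pays off more reliably than micro-tuning descriptor placement.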
Quote
Does anyone know how the GPU's scheduler uses the descriptors?
I find that AMD is particularly open about this, whereas NVIDIA is fairly secretive. If anyone does find good links on it, please post them, as I would like to know more.