Yeah, I'm already scaling the sample count by angle and distance (and fading the offset out with distance; you don't really notice it, so that's great).
I've added clamping of the heightmap detail by massaging the mipmapping, and that's giving me a huge speed boost on large textures, since most of my textures are fairly smooth (medieval brick and such). I'm doing it like this at the moment and it works fine, but since I'm a shader noob, perhaps there's a better way?
// Two helper functions...
float GetMipLevel(sampler2D tex, vec2 uv) {
    // .y is the raw computed LOD; note core GLSL 4.00+ spells this
    // textureQueryLod (the ARB extension used textureQueryLOD).
    return textureQueryLod(tex, uv).y;
}

float GetMipLimit(sampler2D tex, float limit) {
    // Get texture size in pixels; presumes a square texture (!).
    float size = float(textureSize(tex, 0).x);
    // log2 of the size gives the index of the top (largest) mip.
    size = log2(size);
    // Mip 0 is the nearest and largest level. Return the smallest
    // mip offset that keeps the effective heightmap at or below
    // 2^limit texels; if the texture is already small enough,
    // no offset is needed.
    return max(size - limit, 0.0);
}

// Then inside the parallax function, but outside the loop...
// Limit heightmap detail to 2^7 = 128 texels.
float mipLimit = GetMipLimit(tex, 7.0);
float mipLevel = GetMipLevel(tex, uv);
float mipLod = max(mipLevel, mipLimit);

// And sample inside the loop...
textureLod(tex, uv, mipLod);
Yeah, the hierarchical traversal doesn't seem to be worth it in practice; a shame, really. It might still be worth it for soft shadows, though: the QDM paper seems to have an interesting approximation for shadowing.
Another interesting thing I read was in the Cone Step Mapping paper, where he ditches the normals and instead stores vertical/horizontal derivatives, which lets him trivially scale the normals alongside the height. Generating the derivative textures could also be very fast, I think... perhaps even worth doing at load time (or async) and shipping only a heightmap. Seems kinda neat, but I'm not sure how much you buy with that.
Thanks for the tips, I'll remember the BC4 unorm thing.