Hmmm… just cull / merge the triangles that are smaller than a pixel and you're done?
I suppose you are right. If you aren't using any normal maps, just vertex normals, you'd want verts to always render no more than a couple of pixels apart, so that the interpolation doesn't turn into 1995-style per-vertex lighting smeared across lots of pixels.
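Just to make the "smaller than a pixel" idea concrete, here is a minimal sketch of what that check could look like once you already have projected vertex positions in pixel coordinates. Vec2, ScreenSpaceArea and IsSubPixel are made-up names for illustration, not any engine's actual API:

```cpp
// Hypothetical sketch: deciding whether a projected triangle is "smaller
// than a pixel". Assumes vertices are already in screen space (pixel units).
#include <cmath>

struct Vec2 { float x, y; };

// Triangle area in pixels: half the magnitude of the 2D cross product
// of two edge vectors.
float ScreenSpaceArea(const Vec2& a, const Vec2& b, const Vec2& c) {
    return 0.5f * std::fabs((b.x - a.x) * (c.y - a.y) -
                            (c.x - a.x) * (b.y - a.y));
}

// A triangle covering less than roughly one pixel of area is a candidate
// for culling or for being merged into a coarser neighbor.
bool IsSubPixel(const Vec2& a, const Vec2& b, const Vec2& c,
                float thresholdPixels = 1.0f) {
    return ScreenSpaceArea(a, b, c) < thresholdPixels;
}
```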
However, deciding what needs to be processed in a given frame could be complicated. For merging, a compute shader seems reasonable; you would need to know edge info and other things. I'm curious to see how a really long rock asset works. It looks like it would have the same density across it in screen space, so updating that mesh would take some time, unless they process the mesh in chunks no bigger than 2x2x2 or so. If a long asset has to be processed every frame, that is pretty interesting.
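For the chunking idea, here is a rough sketch of what per-chunk update selection could look like, assuming the mesh is pre-split into small spatial chunks and detail is picked from projected size so only chunks whose level actually changed get reprocessed. Chunk, Camera and the helper functions are hypothetical names, not how any particular engine does it:

```cpp
// Hypothetical per-chunk LOD selection: keep screen-space triangle density
// roughly constant and only flag chunks whose LOD changed this frame.
#include <cmath>
#include <vector>

struct Chunk {
    float centerX, centerY, centerZ;  // chunk bounds center in world space
    float radius;                     // bounding-sphere radius
    int   currentLod;                 // LOD level currently resident on the GPU
};

struct Camera {
    float posX, posY, posZ;
    float pixelsPerRadian;            // roughly: resolution / field of view
};

// Rough projected size of the chunk's bounding sphere, in pixels.
float ProjectedSizePixels(const Chunk& c, const Camera& cam) {
    float dx = c.centerX - cam.posX;
    float dy = c.centerY - cam.posY;
    float dz = c.centerZ - cam.posZ;
    float dist = std::sqrt(dx * dx + dy * dy + dz * dz);
    return (2.0f * c.radius / dist) * cam.pixelsPerRadian;
}

// Returns indices of chunks that need re-processing this frame.
std::vector<int> SelectChunksToUpdate(std::vector<Chunk>& chunks,
                                      const Camera& cam) {
    std::vector<int> dirty;
    for (int i = 0; i < static_cast<int>(chunks.size()); ++i) {
        float sizePx = ProjectedSizePixels(chunks[i], cam);
        int lod = 0;
        // Illustrative rule: drop one detail level each time the
        // projected size halves below some reference size.
        while (sizePx < 256.0f && lod < 8) { sizePx *= 2.0f; ++lod; }
        if (lod != chunks[i].currentLod) {
            chunks[i].currentLod = lod;
            dirty.push_back(i);
        }
    }
    return dirty;
}
```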
However, I'm still confused about the 33 million verts for one model: storing x, y, z, Nx, Ny, Nz as 32-bit floats works out to 33M × 24 bytes ≈ 792 MB of data.
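For what it's worth, that figure checks out as a back-of-the-envelope calculation for uncompressed floats, with no indices or other attributes counted:

```cpp
// Sanity check of the 792 MB figure: 33 million vertices, position + normal
// as six 32-bit floats each, nothing else.
#include <cstdio>

int main() {
    const long long verts         = 33'000'000;
    const long long floatsPerVert = 6;   // x, y, z, Nx, Ny, Nz
    const long long bytesPerFloat = 4;
    const long long bytes = verts * floatsPerVert * bytesPerFloat;
    std::printf("%lld bytes = %.0f MB\n", bytes, bytes / 1e6);  // 792 MB
    return 0;
}
```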