I've been reworking the shadow mapping in my engine and noticed some weird behavior. I tried changing the depth texture format between 16-bit, 24-bit, 32-bit, and 32F, but I don't observe any difference at all in the amount of bias required to eliminate shadow acne. For instance, with a 24-bit format I should theoretically need 256 times less bias than with a 16-bit format, yet they behave the same in practice. This doesn't make sense to me, because a higher-precision format should be a better representation of the depth from the light's perspective, and should therefore need less bias, in the same way that less bias is needed when the light's depth range is reduced.
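For context, here is a minimal sketch of the kind of setup I mean (the texture size, filtering, and comparison-mode parameters are placeholders rather than my exact values, and it assumes a valid GL context); the only thing I change between tests is the internal format:

```cpp
// Shadow map depth texture + FBO (simplified sketch).
// Swapping the internal format between GL_DEPTH_COMPONENT16 / 24 / 32 / 32F
// is the only change I make between tests.
GLuint shadowTex;
glGenTextures(1, &shadowTex);
glBindTexture(GL_TEXTURE_2D, shadowTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, 2048, 2048, 0,
             GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_REF_TO_TEXTURE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);

// Depth-only framebuffer for the shadow pass.
GLuint shadowFbo;
glGenFramebuffers(1, &shadowFbo);
glBindFramebuffer(GL_FRAMEBUFFER, shadowFbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, shadowTex, 0);
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);
```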
I queried GL_TEXTURE_DEPTH_SIZE to make sure I'm setting up the textures correctly, and it returns the expected values. Is the driver or hardware implementation choosing a specific format and ignoring what I request? That seems like the only explanation, but then why does GL_TEXTURE_DEPTH_SIZE report the precision I asked for? For my main framebuffer the driver does seem to use the 24-bit depth I request (using 16-bit depth there increases Z-fighting), but the shadow maps behave differently.
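The query itself is essentially this (sketch, assuming the shadow texture is still bound to GL_TEXTURE_2D):

```cpp
// Ask the driver how many depth bits were actually allocated for level 0.
GLint depthBits = 0;
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_DEPTH_SIZE, &depthBits);
printf("shadow map depth bits: %d\n", depthBits);  // reports the value I requested
```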
Based on the bias required, it seems like I'm getting 16-bit depth at best. For instance, with a 13 m depth range (first cascade), I would expect 24-bit depth to give about 13 / 2^24 ≈ 7.7e-7 m of precision, yet I seem to need 1-2 cm of bias. Even 16-bit depth should theoretically give about 2.0e-4 m of precision, which is still much smaller than the bias that seems to be required.
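For concreteness, those numbers come from a back-of-the-envelope calculation like this (it assumes depth values are distributed linearly over the cascade's range, which holds for an orthographic light projection):

```cpp
#include <cstdio>

int main() {
    // First-cascade depth range in meters; assumes linear (orthographic) depth.
    const double range  = 13.0;
    const double step24 = range / double(1 << 24);  // ~7.7e-7 m per depth step
    const double step16 = range / double(1 << 16);  // ~2.0e-4 m per depth step
    std::printf("24-bit step: %.1e m, 16-bit step: %.1e m\n", step24, step16);
    return 0;
}
```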
Does anyone have any ideas for what could be going on?