There are older papers that mention depth impostors, but I've never seen a video of one in action. Does anyone use them? How do they look?
A depth impostor is a billboard impostor that stores a depth value for each pixel, so each pixel has a position in 3D space. When the impostor is composited into the output image, that position is transformed into screen depth space and used for Z-buffering. This makes impostors depth-sort properly against the rest of the scene, and the GPU does most of the work.
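For what it's worth, the depth part boils down to something like this. This is a minimal sketch written as plain C++ rather than a real fragment shader, and it assumes one particular storage convention (a signed per-texel offset along the billboard normal); the names (`impostorFragDepth`, `billboardPoint`, `storedDepth`, etc.) are mine, not from any paper:

```cpp
// Sketch of the depth side of a depth-impostor composite.
// Assumption for illustration: each texel stores a signed offset along the
// billboard normal; the real version would run in a fragment shader and
// write the result to gl_FragDepth.
#include <array>
#include <cstdio>

struct Vec4 { float x, y, z, w; };
using Mat4 = std::array<float, 16>; // column-major, GL-style

// v' = M * v
static Vec4 mul(const Mat4& m, const Vec4& v) {
    return {
        m[0]*v.x + m[4]*v.y + m[8]*v.z  + m[12]*v.w,
        m[1]*v.x + m[5]*v.y + m[9]*v.z  + m[13]*v.w,
        m[2]*v.x + m[6]*v.y + m[10]*v.z + m[14]*v.w,
        m[3]*v.x + m[7]*v.y + m[11]*v.z + m[15]*v.w,
    };
}

// Per pixel: rebuild the texel's world-space position from the billboard
// plane plus the stored depth offset, then project it with the *current*
// camera to get the value that goes into the depth buffer.
float impostorFragDepth(Vec4 billboardPoint,   // point on the billboard plane, w = 1
                        Vec4 billboardNormal,  // unit plane normal, w = 0
                        float storedDepth,     // per-texel offset from the depth texture
                        const Mat4& viewProj)  // current camera's view-projection
{
    Vec4 worldPos = {
        billboardPoint.x + billboardNormal.x * storedDepth,
        billboardPoint.y + billboardNormal.y * storedDepth,
        billboardPoint.z + billboardNormal.z * storedDepth,
        1.0f
    };
    Vec4 clip = mul(viewProj, worldPos);
    float ndcZ = clip.z / clip.w;   // [-1, 1] in GL conventions
    return ndcZ * 0.5f + 0.5f;      // [0, 1] depth-buffer value
}

int main() {
    // Identity stands in for a real view-projection matrix, just to
    // exercise the function with numbers that land in [0, 1].
    Mat4 identity = {1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1};
    float d = impostorFragDepth({0,0,0.5f,1}, {0,0,1,0}, 0.1f, identity);
    std::printf("depth-buffer value: %f\n", d);
}
```

Color is sampled exactly as for an ordinary flat billboard; only the depth write changes, and the hardware Z-test then handles the sorting against regular geometry, which is why the GPU ends up doing most of the work.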
It's not a perfect illusion. Viewed head-on (perpendicular to the billboard) it's exact, but as you move off-axis, before you switch to the next billboard, some parallax error creeps in.
This seems like a good idea that isn't widely used. I'm thinking of it as a technique for distant areas of big worlds: a city where each distant block is represented by a single impostor, for example.
Anyone been down this road?
Ref: Michael Wimmer et al., "Point-Based Impostors for Real-Time Visualization": https://www.researchgate.net/profile/Michael_Wimmer4/publication/220853057_Point-Based_Impostors_for_Real-Time_Visualization/links/57565b8c08ae155a87b9d296/Point-Based-Impostors-for-Real-Time-Visualization.pdf