
Why are Picture in Picture scopes in games such an expensive effect? They seem like they could be easily optimised.

Started by aaandreeew July 23, 2024 10:49 AM
2 comments, last by aaandreeew 5 months ago

I apologise if this is not the type of question usually asked here. This is my first post. I appreciate your responses. I'll also say that I don't know that much about graphics programming, so I'd love to learn of something I haven't considered.

For people not familiar, some realistic first-person shooters depict looking through a magnified scope with a subviewport within the scope. This lets the scope be magnified while peripheral vision remains unchanged. This is called a Picture in Picture (PiP) scope. This is an alternative to simply magnifying the world viewport, which is seen in most shooters.

The explanation I've heard for why PiP is so expensive is that the scene has to be rendered twice. This makes sense, but there's one thing I still wonder about. PiP scopes are actually a special case of subviewports: the subview is (almost) in line with the main view but just zoomed in. So why does the scene need to be fully rendered twice? Surely everything before rasterisation in the rendering pipeline can just be reused from the main viewport. The projection of world objects to the screen would be identical, just scaled linearly according to the magnification. It seems that all the work besides actually sampling textures and filling in the screen has already been done.
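To make the idea concrete, here's a rough sketch of what I'm picturing (just an illustration using GLM, assuming a standard perspective camera; the names and numbers are made up, not from any engine):

```cpp
// Sketch only: the scope camera reuses the main camera's view matrix and just
// narrows the field of view by the magnification factor.
#include <cmath>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

struct Cameras {
    glm::mat4 view;       // shared: the scope looks along (almost) the same axis
    glm::mat4 mainProj;
    glm::mat4 scopeProj;
};

Cameras makeCameras(float mainFovY /* radians */, float aspect, float magnification,
                    const glm::vec3& eye, const glm::vec3& forward, const glm::vec3& up)
{
    Cameras c;
    c.view     = glm::lookAt(eye, eye + forward, up);
    c.mainProj = glm::perspective(mainFovY, aspect, 0.1f, 1000.0f);

    // "Zoom" just shrinks tan(fov/2) by the magnification, which is the linear
    // scaling of the projected X/Y coordinates I described above.
    float scopeFovY = 2.0f * std::atan(std::tan(mainFovY * 0.5f) / magnification);
    c.scopeProj = glm::perspective(scopeFovY, aspect, 0.1f, 1000.0f);
    return c;
}
```

Since only the projection differs, and only by a uniform scale of the projected X/Y, it feels like most of the per-vertex work from the main view should carry over.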

Is it feasible to do something like this?

aaandreeew said:
The projection of world objects to the screen would be identical, just scaled linearly according to the magnification. It seems that all the work besides actually sampling textures and filling in the screen has already been done.

Well, two points:

First, “Sampling textures and filling the screen”, as well as any PP-effects, lighting etc… are the actually expensive parts. Fillrate, bandwidth, especially in 4k environments, are the real killers, not doing some transformations on a few vertices. It can be legit faster to use a 100k vertex model compared to a 4-vertex quad, if that means you don't have to use a texture (especially if that get's rid of transparency).
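Just to give a feel for the scale involved (purely made-up but plausible numbers, nothing measured from a real game):

```cpp
// Back-of-envelope illustration only; resolution, scope coverage and per-pixel
// traffic below are assumptions, not numbers from this thread.
#include <cstdio>

int main() {
    const double frameW = 3840.0, frameH = 2160.0; // 4K main view
    const double scopeCoverage = 0.25;             // scope inset covering ~25% of the screen (assumed)
    const double bytesPerPixel = 48.0;             // rough G-buffer + lighting traffic per pixel (assumed)

    const double scopePixels = frameW * frameH * scopeCoverage;
    const double extraMB = scopePixels * bytesPerPixel / (1024.0 * 1024.0);

    std::printf("extra pixels shaded for the scope: %.1f million\n", scopePixels / 1e6);
    std::printf("extra memory traffic: ~%.0f MB/frame (~%.1f GB/s at 60 fps)\n",
                extraMB, extraMB * 60.0 / 1024.0);
    return 0;
}
```

All of that scales with the pixels the inset covers, not with how the vertices were transformed.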

Second, it's actually hard to re-use what you describe. Vertex calculations are also done in a shader, and a shader can't easily just store an intermediate result temporarily and pick it back up in a later pass. Since you need to render to a different part of the screen, if not a different texture, you have to change the GPU state in between and issue a separate draw, and that alone already carries a lot of the real overhead of rendering.
There are techniques to reduce those overheads. You can stream transformed vertices out to a buffer (transform feedback / stream output) and render from that buffer, but that has its own overhead and is unlikely to be faster for just two passes per frame (correct me if I'm wrong, I haven't tested this myself). Alternatively, with geometry shaders you can render the same model to two different render targets or viewports in a single pass. This is used e.g. for VR, to render both eyes at once, or for cascaded shadow maps. Still, the speedup is not half of what you'd otherwise pay, because, as I said, the large-scale overheads still apply.
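For reference, the geometry-shader duplication I mean looks roughly like this, shown as GLSL inside a C++ string. It's only a sketch under a couple of assumptions: the vertex shader passes world-space positions through, and viewport arrays (gl_ViewportIndex) are available; none of it comes from a specific engine.

```cpp
// Sketch: one draw call; the geometry shader emits each triangle twice and
// routes one copy to viewport 0 (main view) and one to viewport 1 (scope inset).
// Assumes the host has configured two viewports (e.g. via glViewportIndexedf,
// ARB_viewport_array) and that the vertex shader outputs world-space positions.
const char* kDuplicateToViewportsGS = R"(
#version 430
layout(triangles) in;
layout(triangle_strip, max_vertices = 6) out;

uniform mat4 uViewProj[2];   // 0 = main camera, 1 = zoomed scope camera

void main()
{
    for (int vp = 0; vp < 2; ++vp)        // once per output viewport
    {
        for (int i = 0; i < 3; ++i)
        {
            gl_ViewportIndex = vp;        // route this copy of the triangle
            gl_Position = uViewProj[vp] * gl_in[i].gl_Position;
            EmitVertex();
        }
        EndPrimitive();
    }
}
)";
```

As I said, this saves submitting and transforming the geometry twice, but not the per-pixel cost, so it's nowhere near a 2x win.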


@juliean: Thanks for the insight! That makes a lot of sense.

This topic is closed to new replies.
