I apologise if this is not the type of question usually asked here. This is my first post. I appreciate your responses. I'll also say that I don't know that much about graphics programming, so I'd love to learn of something I haven't considered.
For those not familiar: some realistic first-person shooters render the view through a magnified scope as a separate subviewport inside the scope itself, so the scope image is magnified while peripheral vision stays at normal zoom. This is called a Picture-in-Picture (PiP) scope, as opposed to simply magnifying the whole world viewport, which is what most shooters do.
The explanation I've heard for why PiP is so expensive is that the scene has to be rendered twice. This makes sense, but there's one thing I still wonder about. PiP scopes are really a special case of subviewports: the subview is (almost) aligned with the main view, just zoomed in. So why does the scene need to be fully rendered twice? Surely everything before rasterisation in the rendering pipeline could be reused from the main viewport: the projection of world objects to the screen would be identical, just scaled linearly by the magnification factor. It seems that all the work besides actually rasterising, sampling textures, and filling in the pixels has already been done.
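To make the "scaled linearly" part concrete, here is a minimal sketch I put together, assuming a standard OpenGL-style perspective projection with the camera looking down -Z (the function and variable names like `projectXY` and `magnification` are just my own for illustration). It projects the same view-space point once with the main camera's FOV and once with a narrower "scope" FOV chosen so that tan(fov'/2) = tan(fov/2) / m, and shows the zoomed result is exactly the main result scaled by m:

```cpp
// Sketch only: shows that, for a pinhole perspective projection, "zooming"
// by magnification m just scales the projected x/y coordinates by m.
#include <cmath>
#include <cstdio>

const double kPi = 3.14159265358979323846;

// Project a view-space point (x, y, z) with vertical FOV fovY and aspect
// ratio 'aspect', returning normalized device coordinates (ndcX, ndcY).
// Depth terms are omitted since they are unaffected by the zoom.
void projectXY(double x, double y, double z, double fovY, double aspect,
               double& ndcX, double& ndcY) {
    double f = 1.0 / std::tan(fovY / 2.0);  // focal scale
    double clipX = (f / aspect) * x;
    double clipY = f * y;
    double clipW = -z;                      // camera looks down -Z
    ndcX = clipX / clipW;                   // perspective divide
    ndcY = clipY / clipW;
}

int main() {
    const double fovY = 60.0 * kPi / 180.0;    // main view FOV
    const double aspect = 16.0 / 9.0;
    const double m = 4.0;                      // scope magnification
    // Zoomed FOV chosen so tan(fovZoom/2) = tan(fovY/2) / m.
    const double fovZoom = 2.0 * std::atan(std::tan(fovY / 2.0) / m);

    // An arbitrary view-space point in front of the camera.
    double x = 1.5, y = -0.8, z = -10.0;

    double mainX, mainY, zoomX, zoomY;
    projectXY(x, y, z, fovY, aspect, mainX, mainY);
    projectXY(x, y, z, fovZoom, aspect, zoomX, zoomY);

    // The zoomed projection equals the main projection scaled by m.
    std::printf("main: (%f, %f)\n", mainX, mainY);
    std::printf("zoom: (%f, %f)  vs  m * main: (%f, %f)\n",
                zoomX, zoomY, m * mainX, m * mainY);
}
```

Under this (admittedly simplified) model, the per-vertex work for the scope view differs from the main view only by a uniform scale on the projected x/y, with depth untouched, which is what makes me question whether the geometry work really has to be repeated.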
Is it feasible to do something like this?