
Antialiasing: How to Approach It

Started by June 02, 2017 06:32 PM
4 comments, last by Thaumaturge 7 years, 8 months ago

For a while now, I've been using my chosen engine's internal antialiasing for my current game project. It seemed to work well, and there didn't seem to be a major performance impact. I believe that the specific antialiasing technique used is multi-sampling. The engine in question is Panda3D.

Recently, however, while doing some performance tuning, I discovered that (A) the engine-provided function that toggles antialiasing from within the game no longer seemed to work, and (B) I got a significant performance gain by disabling the request for the main window to be multi-sampled.

Issue (A) seems as though it may be a driver-related issue: If I request a multi-sampled buffer, and allow for antialiasing in the NVidia control panel, I get antialiasing--regardless of the engine-provided method that's supposed to toggle it. If I don't request such a buffer, or disallow antialiasing in the control panel, I (understandably) don't get antialiasing.

This might not be a major problem if not for issue (B): it seems to me that antialiasing is something that I want to be able to toggle in the graphics options, and I want that toggling to be effective in terms of performance gain (where appropriate).

As far as I see, changing whether or not the main window's buffer is multi-sampled would call for reopening the window--in this case, the main program window. This seems a little drastic, to my mind. Or am I mistaken, here?

Which brings me to my core question in this thread: How do modern games generally handle changes in antialiasing settings? Do they destroy and remake their windows? Is antialiasing usually handled by a shader operating on the output of an offscreen buffer, which can thus (I imagine) be more easily swapped out? Something else that hasn't occurred to me?


If you clearly explain that MSAA is the first thing that should be disabled in case of bad performance, does it matter whether it is controlled in-game or in the graphics adapter's configuration?



Forcing settings from within a driver control panel is a hack. These hacks can even break some games if they're rendering data instead of colours, etc...

Creating the window and creating the GL/D3D back-buffer (a.k.a. "swap chain") are two different operations. You should be able to recreate/modify the swap chain without having to recreate the window.

However, any modern game that uses post-processing techniques will never create an MSAA back-buffer, even if they use MSAA! They will render to an MSAA texture, resolve to a non-MSAA texture, perform post-processing using non-MSAA textures, and end up with the results in the non-MSAA back-buffer. In these situations you just have to re-create your main scene rendering texture.
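
Roughly, in plain OpenGL terms (not Panda3D's API--the names, sample count, and resolution here are just placeholders), that flow might look something like this: the scene is drawn into a multisampled framebuffer, resolved into an ordinary texture, and post-processing then reads that texture before writing to the non-MSAA back-buffer. Toggling MSAA then only means recreating these targets, not the window or swap chain.

// Minimal sketch of the approach described above: render the scene into an
// MSAA target, then resolve it into an ordinary texture for post-processing.
#include <GL/glew.h>

GLuint msaaFbo, msaaColor, msaaDepth;   // MSAA scene target
GLuint resolveFbo, resolveColor;        // non-MSAA resolve target
const int width = 1280, height = 720, samples = 4;

void createSceneTargets()
{
    // MSAA colour texture + MSAA depth renderbuffer.
    glGenTextures(1, &msaaColor);
    glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, msaaColor);
    glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, samples, GL_RGBA8,
                            width, height, GL_TRUE);

    glGenRenderbuffers(1, &msaaDepth);
    glBindRenderbuffer(GL_RENDERBUFFER, msaaDepth);
    glRenderbufferStorageMultisample(GL_RENDERBUFFER, samples,
                                     GL_DEPTH_COMPONENT24, width, height);

    glGenFramebuffers(1, &msaaFbo);
    glBindFramebuffer(GL_FRAMEBUFFER, msaaFbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D_MULTISAMPLE, msaaColor, 0);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                              GL_RENDERBUFFER, msaaDepth);

    // Ordinary (non-MSAA) texture that the post-processing passes read from.
    glGenTextures(1, &resolveColor);
    glBindTexture(GL_TEXTURE_2D, resolveColor);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);

    glGenFramebuffers(1, &resolveFbo);
    glBindFramebuffer(GL_FRAMEBUFFER, resolveFbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, resolveColor, 0);
}

void endSceneRendering()
{
    // Resolve: combine the sub-samples into the non-MSAA texture.
    glBindFramebuffer(GL_READ_FRAMEBUFFER, msaaFbo);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolveFbo);
    glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);
    // Post-processing then samples resolveColor and writes to the back-buffer.
}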

Forcing settings from within a driver control panel is a hack. These hacks can even break some games if they're rendering data instead of colours, etc...


Even if it weren't so, I don't want to ask my players to find and mess with driver utilities.

Creating the window and creating the GL/D3D back-buffer (a.k.a. "swap chain") are two different operations. You should be able to recreate/modify the swap chain without having to recreate the window.


Hmm... I might look into whether the engine exposes this functionality--if I don't go with the method described below, of course.

However, any modern game that uses post-processing techniques will never create an MSAA back-buffer, even if they use MSAA! They will render to an MSAA texture, resolve to a non-MSAA texture, perform post-processing using non-MSAA textures, and end up with the results in the non-MSAA back-buffer. In these situations you just have to re-create your main scene rendering texture.


I'm not actually using any post-processing techniques at the moment, I believe. I've looked briefly into implementing either screen-space ambient occlusion or parallax mapping, but looking at the algorithms I fear that they might impact performance more than I'd like.

Still, perhaps it's worth switching to render-to-texture specifically to implement easily-switchable antialiasing?

For the sake of clarity, I'd like to check that I'm correctly understanding some of the terms that you use above: you refer to the back-buffer and to rendering to an off-screen texture, which might be read to suggest that the texture is not a buffer. Am I correct in understanding that this is not so--that the back-buffer refers specifically to the buffer used for double-buffering, and that the off-screen texture is simply another buffer? (This seems to be the way that Panda3D refers to such things--see the first paragraph of this manual page.)


"Back-buffer" specifically refers to a render target that belongs to a "swap chain" of buffers that are used to present to a window or screen, where that chain typically consists of two separate buffers that you swap every time you present. The terminology can vary a bit between API's and engines, and these terms are used by D3D/DXGI API's. In some other places you might see it called a "flip" operation instead of a "present", since you're "flipping" between front and back buffers. You may also see the back buffer referred to as a "frame buffer", which is an older term that dates back to days when graphics hardware had a specific region of memory that was dedicated to scanning out images to the display.

So in general you have textures, which are typically read-only. But you can also have textures that the GPU can write to, and then possibly read from later. In D3D these are called "render targets", since they can be the "target" of your rendering operations. With that terminology your back-buffer is really just a special render target, and the latest versions of the API absolutely work that way. In older D3D versions it was common to refer to non-back-buffer render targets as "off-screen targets", or "off-screen textures", since back then it was rather unusual to have a render target that wasn't tied to a swap chain. So you might say "I'm going to render this shadow to an off-screen texture, then I'll read from it when I'm actually rendering to the screen using the back-buffer". In modern times it's possible for an engine to go through dozens of different render targets in a single frame, and typically you'll do the majority of your rendering to "off-screen" targets instead of the back-buffer. A typical setup might go like this:

  • Render Z Only to depth buffer
  • Render to N G-Buffer Targets
  • Render SSAO to RT
  • Render Shadow Maps
  • Read G-Buffer/Depth/SSAO/Shadows and Render Lighting to RT
  • Read lighting RT and perform Bloom + Motion Blur + DOF Passes
  • Read Lighting/Bloom/MB/DOF results and combine, perform tone mapping, write to back-buffer
  • Render UI
  • Present

In this context it's really the back-buffer that's the "unusual" target, and the terminology and newer APIs tend to reflect that.
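
Continuing the rough D3D11 sketch from above, an "off-screen" render target is nothing special either--just a texture created with the render-target bind flag (and usually the shader-resource flag, so that later passes can read it back). The format and size here are placeholders.

ID3D11Texture2D*          lightingTex = nullptr;
ID3D11RenderTargetView*   lightingRTV = nullptr;  // written by the lighting pass
ID3D11ShaderResourceView* lightingSRV = nullptr;  // read by the post-processing passes

void createLightingTarget()
{
    D3D11_TEXTURE2D_DESC rtDesc = {};
    rtDesc.Width            = 1280;
    rtDesc.Height           = 720;
    rtDesc.MipLevels        = 1;
    rtDesc.ArraySize        = 1;
    rtDesc.Format           = DXGI_FORMAT_R16G16B16A16_FLOAT;  // e.g. an HDR lighting target
    rtDesc.SampleDesc.Count = 1;
    rtDesc.Usage            = D3D11_USAGE_DEFAULT;
    rtDesc.BindFlags        = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;

    device->CreateTexture2D(&rtDesc, nullptr, &lightingTex);
    device->CreateRenderTargetView(lightingTex, nullptr, &lightingRTV);
    device->CreateShaderResourceView(lightingTex, nullptr, &lightingSRV);
}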

As for MSAA...back in the days before we used dozens of render targets and games typically just rendered directly to the back-buffer/framebuffer, the way you would use MSAA would be to specify that you wanted MSAA when you created the swap chain, and then your back-buffer would just magically have MSAA. You'd draw to it, present, and it would Just Work without you really needing to do anything as a programmer. Behind the scenes the GPU had to do some work with your MSAA target, since it's not natively presentable on its own. It has to be resolved, which is an operation that combines the individual sub-samples to create a non-MSAA image that can be shown on the screen. Up until D3D12 you could still do things this way: you could specify an MSAA mode in the swap chain parameters, render to an MSAA swap chain, and the driver would resolve for you when you call Present.

But it doesn't make sense to do this if you have a setup like the one I outlined above, since you've already done a bunch of post-processing operations and UI rendering by the time you reach the back-buffer, and you probably only want MSAA for your "real" geometry passes. So instead you'll create your own MSAA render target, and manually resolve to a non-MSAA render target. For old-school forward rendering this can be pretty simple: there's a dedicated API for resolving, and it will let the driver/GPU do it for you. However starting with D3D10 you can also do your own resolves in a shader, which is required for deferred rendering and/or for achieving higher-quality results with HDR.

Even if you're just doing a normal resolve there's no downside to creating your own MSAA target instead of creating the swap chain with MSAA, since the same thing would happen behind the scenes if you created an MSAA swap chain. So I would recommend doing that, since it will set you up for doing more advanced rendering with post-processing or other techniques that require additional render targets.
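
Continuing the same rough D3D11 sketch (again, names and values are placeholders): you create your own MSAA scene target instead of asking for an MSAA swap chain, and resolve it into a non-MSAA texture before post-processing. Toggling MSAA in an options menu then just means recreating these textures with a different sample count (and skipping the resolve when it's off), rather than touching the window or swap chain.

ID3D11Texture2D* msaaSceneTex = nullptr;   // rendered into by the geometry passes
ID3D11Texture2D* resolvedTex  = nullptr;   // read by the post-processing passes

void createMsaaSceneTargets(int sampleCount)
{
    D3D11_TEXTURE2D_DESC msaaDesc = {};
    msaaDesc.Width            = 1280;
    msaaDesc.Height           = 720;
    msaaDesc.MipLevels        = 1;
    msaaDesc.ArraySize        = 1;
    msaaDesc.Format           = DXGI_FORMAT_R16G16B16A16_FLOAT;
    msaaDesc.SampleDesc.Count = sampleCount;               // e.g. 4 for 4x MSAA
    msaaDesc.Usage            = D3D11_USAGE_DEFAULT;
    msaaDesc.BindFlags        = D3D11_BIND_RENDER_TARGET;

    D3D11_TEXTURE2D_DESC resolvedDesc = msaaDesc;
    resolvedDesc.SampleDesc.Count = 1;                     // non-MSAA
    resolvedDesc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;

    device->CreateTexture2D(&msaaDesc, nullptr, &msaaSceneTex);
    device->CreateTexture2D(&resolvedDesc, nullptr, &resolvedTex);
}

void resolveScene()
{
    // The "dedicated API for resolving" mentioned above: combine the sub-samples
    // into the non-MSAA texture that post-processing will read from.
    context->ResolveSubresource(resolvedTex, 0, msaaSceneTex, 0,
                                DXGI_FORMAT_R16G16B16A16_FLOAT);
}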

Getting back to the issue of forcing MSAA through the driver control panel: in the older days when people just rendered to the back-buffer, it was really easy for the driver to force MSAA to be enabled without the app knowing about it. It would just silently ignore the MSAA parameter when creating the swap chain, and replace it with something else. Back then everyone just did simple forward rendering, so everything would once again "just work".

These days it's not nearly so simple. Even forward-rendered games often go through many render targets, and so the driver would have to carefully choose which render targets get silently promoted to MSAA. On top of that, it would have to figure out a point in the frame where it could sneak in a resolve operation before anybody reads from those render targets, since the results would be broken otherwise. In a deferred setup like the one I outlined above there's really no way for the driver to do it, since MSAA handling requires invasive changes to the shader code (this is often true in modern forward-rendering setups as well, which often make use of semi-deferred techniques that require special handling for MSAA).

This is what Hodgman is referring to when he says that it's a hack that can break games, and why nobody should ever turn it on anymore (I don't even know why they still have it in the control panel). As a developer, the only sane thing you can do is ignore that feature entirely, and hope that nobody turns it on when they run your game or app.

Ah, interesting--thank you for the in-depth explanation! ^_^

Regarding the back-buffer and "textures" vs. "buffers", if I read you correctly then I was indeed correct in my understanding (if perhaps a little out-of-date). Excellent--setting up off-screen buffers is pretty straightforward in Panda3D (indeed, I'm already creating two such buffers in order to render shadow-maps for my two major light sources).

Regarding MSAA, I see. As noted above, I'm currently not using any post-processing stages (although reading your response, it occurs to me that handling shadows in a post-processing stage might be more efficient than doing so in my general fragment shaders...). However, it's again quite easy to switch over to doing so--indeed, one version of what eventually became this project used a post-processing stage to make the scene look like a "drawing".

This calls for looking into how Panda3D handles that manual resolution of MSAA that you mention, I think--it may well be done behind the scenes. If I recall correctly, I can request a number of "multisamples" when creating an off-screen buffer.

Regarding the driver-based MSAA and the control panel, ah--I didn't realise that it was quite that serious! Thank you for the warning! o_o


This topic is closed to new replies.
