
Fast, cheap shadows for DOOM games?

Started by ReignOnU, March 17, 2024 07:42 PM
34 comments, last by JoeJ 9 months ago

Repost from GMC:

Fast, cheap, precise, and robust shadows for Doom-like 2.5D and 3D games.

I have been looking for a mathematically elegant solution to this for months, and have come up dry again and again. I was going to sell this on the marketplace for profit, but I am ready to throw in the towel at this point. We were supposed to develop a robust lighting and shadow system by 2009... we are way overdue and now we are in the dystopian timeline... GM was supposed to have this by now. So I think it's important that we find a new method, rather than just hoarding the discoveries to make some money from the marketplace.

Personally, I prefer shadow volumes, but they are extremely heavy on fill rate. Of course, more modern games just use shadow maps instead. The problem with shadow maps is that they really only suit axis-aligned geometry; they work for Minecraft but not for DOOM. The minute you place a wall diagonally at some angle, you automatically get pixelation in the result. So people use cheap hacks such as blurring the shadows, but that causes a new problem: you don't get naturally sharp edges near creases, and you get light tunneling near creases.

The pic on the left shows an ideal candidate for shadow mapping; the pic on the right would have pixelation due to angled walls.

Shadow mapping is not a robust method for indoor games. For every light you need 6 textures to make a cubemap, and unless the lights are very short range, you might need six 2048x2048 textures or larger. And perhaps even more perplexing, every pixel of the world mesh needs to somehow "know" which texture to sample from. So how could you have an optimized world mesh when you are limited to sampling 8 textures? You would either have to unoptimize the world mesh by dividing it into separate meshes, or create a mega texture (which would also make loading the shadow-map UVs more complicated). Then there is the problem of having to atlas each shadow map into the megatexture atlas, and creating an optimized texture atlas is difficult in and of itself.
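For context, the per-pixel cube-map shadow test itself is just a single direction-indexed lookup; it's the resolution and atlasing around it that hurt. A minimal CPU-side sketch of that test, assuming the cube map stores linear distance from the light (the sampler below is only a stand-in for a GPU cube-texture fetch):

    #include <cmath>
    #include <functional>

    struct Vec3 { float x, y, z; };

    static Vec3  sub(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
    static float length(const Vec3& v) { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }

    // Stand-in for sampling a depth cube map by direction. On the GPU this is a
    // single cube-texture fetch, which is how a pixel "knows" which of the 6 faces
    // to read: the direction itself selects the face.
    using CubeDepthSampler = std::function<float(const Vec3& dir)>;

    // Returns 1.0 if the fragment is lit by the point light, 0.0 if shadowed.
    // Assumes the cube map stores the linear distance from the light to the
    // nearest occluder along each direction.
    float pointLightShadow(const Vec3& fragPos, const Vec3& lightPos,
                           const CubeDepthSampler& shadowCube, float bias = 0.05f)
    {
        Vec3  toFrag       = sub(fragPos, lightPos);   // direction used to index the cube map
        float fragDist     = length(toFrag);           // distance from the light to this pixel
        float occluderDist = shadowCube(toFrag);       // nearest occluder in that direction
        return (fragDist - bias > occluderDist) ? 0.0f : 1.0f;
    }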

Then the shadow maps often do not produce very good results. For instance, I played the new Mario Party game (it was either for Switch or Wii U). Even though it was a next-gen game, and top-down, the shadow maps looked terrible, like a graphical downgrade from previous games. So some immediately prefer ray-traced graphics, but ray tracing is out of the question; most people do not have ray-tracing GPUs anyway.

So I was thinking if you had a game with an overall DOOM style, you could mathematically exploit this to somehow create optimized shadows, because DOOM's geometry is almost orthogonal in a way, just with angled walls. Maybe someone better at math than me could figure this out.


ReignOnU said:
So I was thinking if you had a game with an overall DOOM style, you could mathematically exploit this to somehow create optimized shadows.

You mean the flat DOOM walls should somehow correspond to a flat shadow map, achieving a 1:1 mapping so no texels are magnified, or multisampled and thus 'wasted'?

Well, you could use a skewed SM projection matrix to fit the SM to the wall as well as possible, but then you would need to render one SM for each wall the light affects, so the cost goes up, not down.
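As a concrete flavour of that idea (not a full renderer, just the projection setup): build a light-space view toward the wall and an off-axis frustum that exactly encloses the wall's four corners, so SM texels map roughly 1:1 onto the wall. A hypothetical sketch using glm:

    #include <algorithm>
    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    // Hypothetical helper: build a combined projection * view matrix for a point
    // light that tightly encloses one wall quad, so shadow-map texels land on the
    // wall roughly 1:1. 'corners' are the wall's four corners in world space,
    // ordered around the quad.
    glm::mat4 fitShadowMatrixToWall(const glm::vec3& lightPos,
                                    const glm::vec3 corners[4],
                                    float farPlane)
    {
        glm::vec3 center = (corners[0] + corners[1] + corners[2] + corners[3]) * 0.25f;
        glm::vec3 up     = glm::normalize(corners[3] - corners[0]);   // one wall edge as "up"
        glm::mat4 view   = glm::lookAt(lightPos, center, up);

        // Transform the corners into light view space and find the closest one.
        glm::vec3 vs[4];
        float nearZ = farPlane;
        for (int i = 0; i < 4; ++i) {
            vs[i] = glm::vec3(view * glm::vec4(corners[i], 1.0f));
            nearZ = std::min(nearZ, -vs[i].z);          // view space looks down -Z
        }
        nearZ = std::max(nearZ * 0.99f, 0.01f);         // near plane just in front of the wall

        // Slide each corner onto the near plane to get tight off-axis frustum bounds.
        float l = 1e9f, r = -1e9f, b = 1e9f, t = -1e9f;
        for (int i = 0; i < 4; ++i) {
            float s = nearZ / -vs[i].z;
            l = std::min(l, vs[i].x * s);  r = std::max(r, vs[i].x * s);
            b = std::min(b, vs[i].y * s);  t = std::max(t, vs[i].y * s);
        }
        return glm::frustum(l, r, b, t, nearZ, farPlane) * view;
    }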

You could also render the shadows to the wall directly instead of to an SM. Like Quake did, which brings us to baked and static lightmaps. Or, if we want dynamic lighting, texture space / object space shading. That's my primary work btw. The colored texels you saw are actually a form of global parametrization, which those techniques require. Oxide Games used object space lighting in an RTS, and recently they improved it so the engine could also be used for an FPS. But there is no game out yet. The primary advantage is that you don't need to relight everything every frame. You can cache the lit results and use them for many frames, updating only what actually changes.

ReignOnU said:
You would either have to unoptimize the world mesh by dividing it into separate meshes, or create a mega texture

Epic has a good shadow system in UE5, Virtual Shadow Maps. It's many small tiles, each with individual resolution, so there is almost no magnification or multisampling. It requires rendering many small SMs instead of a few big ones, and they can do this because the Nanite software rasterizer can render many views in parallel.

All of those techniques are very complex ofc. But it's good stuff and there is progress going on.


JoeJ said:
You mean the flat DOOM walls should somehow correspond to a flat shadow map, achieving a 1:1 mapping so no texels are magnified, or multisampled and thus 'wasted'?

Let's not limit ourselves to merely shadow maps; perhaps we could even invent a brand new shadow technique for this that has never yet been seen.

JoeJ said:
You could also render the shadows to the wall directly instead of to an SM. Like Quake did, which brings us to baked and static lightmaps. Or, if we want dynamic lighting, texture space / object space shading. That's my primary work btw.

Good, nobody has solved this so far, you may be the first.

Probably a hybrid technique could be the best solution, but let's make things easier and assume the entire world is just a static mesh, and then go from there.

JoeJ said:
Epic has a good shadow system in UE5, Virtual Shadow Maps. It's many small tiles, each with individual resolution, so there is almost no magnification or multisampling. It requires rendering many small SMs instead of a few big ones, and they can do this because the Nanite software rasterizer can render many views in parallel.

All of those techniques are very complex ofc. But it's good stuff and there is progress going on.

What UE5 games use this, so I can review the results? For example, I have seen many UE4 games that have subpar shadow maps - shadows that suddenly pop in only a dozen meters away, for instance.


Going back, the ultimate “fast and cheap” was a dark circular blob under characters, at ground level. This was typical of “2.5D” games if they had a shadow at all.

Shadow maps, projecting shadow volumes and raytracing shadows are far more effort, but what players expect from modern games. They are not “fast and cheap” by any means, but modern machines have the processing power available for use.

ReignOnU said:
Let's not limit ourselves to merely shadow maps; perhaps we could even invent a brand new shadow technique for this that has never yet been seen.

But what's your idea / proposal? It's not clear to me.

ReignOnU said:
Good, nobody has solved this so far, you may be the first. Probably a hybrid technique could be the best solution, but let's make things easier and assume the entire world is just a static mesh, and then go from there.

I have a new method for realtime GI, and it can do shadows / reflections. But it lacks high frequency details, so i can't do hard shadows or sharp reflections. Thus i'm still interested in ideas for shadows.

Personally i would prefer ray tracing over SM. Once we shift to incremental updates, SM becomes inefficient, because you always need to generate all SMs every frame, even if you only need to update a small set of surfaces per frame. With RT, you can calculate shadows only where needed.

But sadly RT can't do LOD. We would need access to the BVH data structure to update it locally where detail changes. But the amateurs at NV and MS have overlooked this. They have blackboxed the BVH to make it easy, and now RT is practically useless until they fix their APIs and philosophy. Nanite can't be ray traced. They have to fall back to low poly proxies without LOD to use awesome RTX, and i have the same problem.

Currently Epic's VSM seems the best compromise going forward.

ReignOnU said:
What UE5 games use this

I guess all of them. Behind Nanite and Lumen, VSM just did not receive a lot of attention from media and marketing. But it's big progress.

Another recent development is 'RTSM', though i may remember the name wrong. It uses (software) RT on a traditional SM to fake area lights, giving a much better approximation of soft shadows than earlier methods. Pretty expensive, but many current-gen games do this.

frob said:

Going back, the ultimate “fast and cheap” was a dark circular blob under characters, at ground level. This was typical of “2.5D” games if they had a shadow at all.

If your characters are 2D billboards, then that's probably the best you can do. I love the look of 2D pixel art billboards, but since they're not real 3D objects, they can't really participate in a 3D shadow system.


a light breeze said:
I love the look of 2D pixel art billboards,

Initially, i always wanted better graphics. More bits, more colors, more frames, more details, more! More!
And i got it. And it was awesome to see the progress, and even more awesome to expect what's still ahead.

But not anymore.
Instead, i realize i like pixel art. I play this new game using the Quake 1 engine, and i'm impressed by those dull low-poly graphics just as much as i was back then. To me, one of the most visually impressive games of the past decade was Fez.
I also tried the new Outcast game. Total overdose of details. I'm impressed too, but i also complain a lot. The lighting is bad. Traversal stutters all the time. It feels like they wanted too much. And this feeling dominates over the impression.

So i go back to the Quake game and enjoy this much more.

I'm confused by this.
I'm not sure what gamers really expect.
Maybe it's just me getting old, but in any case: expectations obviously vary a lot.

JoeJ said:
Initially, i always wanted better graphics. More bits, more colors, more frames, more details, more! More!
And i got it. And it was awesome to see the progress, and even more awesome to expect what's still ahead.

Same lol. I used to always want more and more realistic graphics, but then I realized it was technocrat folly. Mainly, I realized two things: 1, that I just liked nature IRL, and that is why I wanted realistic graphics; 2, that realistic graphics on a computer are never going to be the same as IRL, because on a screen you only see a small FOV in 2D, so an FPS will not be like RL because you won't be able to see humanoids the way you do in real life.

VR would be much closer to RL, but it is missing real things such as G-forces, smells, etc. The natural progression of VR would be Elon's Neuralink, but I do not trust Musk at all, nor do I trust spyware tech gurus. Maybe if society were some utopia with megacorporations that did not constantly push adware/spyware/crapware on people and also believed in the right to repair, then I might trust some of the tech more.

JoeJ said:
Instead, i realize i like pixel art. I play this new game using the Quake 1 engine, and i'm impressed by those dull low-poly graphics just as much as i was back then.

Yep. I've always been waiting for games to “improve”. But it seems most of them never really have. It's not really “next-gen gfx” if shadows pop in and out a dozen meters away, is it? I guess people need to go back to the basics. For example, the Wii GoldenEye looks like slop, while the emulated N64 GoldenEye looks better. I can see why some people might disagree if they have only played it on the N64 at 240p resolution.

Then there are actually some next-gen games that have good gfx. Overall, though, I am unimpressed with most next-gen games I see. Even if the gfx are good, it ends up having too much detail, making everything feel too cluttered.

frob said:
Going back, the ultimate “fast and cheap” was a dark circular blob under characters, at ground level. This was typical of “2.5D” games if they had a shadow at all.

Shadow maps, projecting shadow volumes and raytracing shadows are far more effort, but what players expect from modern games. They are not “fast and cheap” by any means, but modern machines have the processing power available for use.

I am talking about environment shadows only.

Perhaps we should first list all the possible ways to render environment shadows, and then worry afterwards about how to get there.

So far I can think of:

Texture atlas: Store every wall and floor into a texture atlas. Since all the walls are vertical and parallel, this should be easier to optimize than a world with arbitrary shapes.

Shadow volumes: Extrude shadow polygons and then use the stencil buffer to decide which pixels are lit. This is extremely heavy though, and the normal z-pass variant breaks if the camera is inside a shadow volume, so you have to use the DOOM 3 z-fail variant (Carmack's Reverse), which is usually even heavier (see the stencil state sketch after this list).

Shadow maps: Store a cubemap for every light. This has a lot of problems; it's easy to see on a Nintendo Switch, where shadows seem to have a very short draw distance, and on Xbox the shadow draw distance is small as well.

Decal shadows: Place a shadow decal onto the wall. Since the walls are flat vertical planes, you can simply place square decals, which is easier than trying to fit them to odd shapes.

Raytracing: Get the GPU to trace the shadows per pixel. An absurdly expensive brute-force method, not elegant.
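For the shadow-volume entry above, the DOOM 3 trick is the z-fail ("Carmack's Reverse") stencil pass, which stays correct even when the camera sits inside a volume. A sketch of the OpenGL state for that pass, assuming a context and loader are already set up and drawShadowVolumes() is a hypothetical draw call for the extruded geometry:

    #include <glad/glad.h>   // any GL loader works; assumes a GL 3.3+ context already exists

    void drawShadowVolumes();  // hypothetical draw call for the extruded volume geometry

    // Z-fail ("Carmack's Reverse") stencil pass: count volume back faces that fail
    // the depth test up and front faces that fail it down. Pixels left with a
    // non-zero stencil value are in shadow, even if the camera is inside a volume.
    void stencilShadowPassZFail()
    {
        glEnable(GL_STENCIL_TEST);
        glEnable(GL_DEPTH_TEST);
        glDepthMask(GL_FALSE);                               // depth buffer already holds the scene
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); // stencil-only pass
        glDisable(GL_CULL_FACE);                             // rasterize both faces, split by ops below

        glStencilFunc(GL_ALWAYS, 0, ~0u);
        glStencilOpSeparate(GL_BACK,  GL_KEEP, GL_INCR_WRAP, GL_KEEP); // back face + depth fail -> ++
        glStencilOpSeparate(GL_FRONT, GL_KEEP, GL_DECR_WRAP, GL_KEEP); // front face + depth fail -> --

        drawShadowVolumes();

        // The lighting pass afterwards draws only where stencil == 0:
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
        glStencilFunc(GL_EQUAL, 0, ~0u);
        glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    }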

JoeJ said:
I have a new method for realtime GI, and it can do shadows / reflections.

Oh does it use light probes or spherical harmonics?


ReignOnU said:
1, that I just liked nature IRL, and that is why I wanted realistic graphics.

Yeah, but i think there is a deeper meaning. We do not want realism at all. We want fantasy, other worlds… the exact opposite of realism. We want imagination.
But we want to believe in our imagination. It should feel real, not like a trick or smoke and mirrors. We can't be immersed if we notice the tricks.

So even after the realization that good graphics are not that important, we still need to work on it like before. Nothing changes.

But certainly we did invest too much into improving gfx, leaving other fields behind. Mainly character simulation. Motion capture is a bad investment, only making the cliffs in the uncanny valley more steep. We should simulate, not animate, imo. Animation is for movies, not for video games.

ReignOnU said:
Even if the gfx are good, it ends up having too much detail, making everything feel too cluttered.

Well, give a new tool to game devs, and they will overuse and exaggerate it for quite some time, until they learn to use it properly and subtly.
But that's just how we are. It's probably not so bad. : )

Currently it's really easy to criticize how bad the games tend to be. I can't find a single AAA game anymore which i actually like or want to play. I can say they are all the same, made just for teens, lacking this or that while having features nobody wants. Etc.
But if you look closely at modern games, they have become better in all aspects. Better player controllers, better physics, better storytelling, better art; almost everything is better, not worse.
So i do not really understand what the problem is. But there is a problem, and it's huge.

ReignOnU said:
So far I can think of: Texture atlas:

We've had that - lightmaps. But games became so big that it's no longer easily possible to store unique lightmaps and UVs for every surface. And since we want dynamic lighting anyway, the approach was buried and forgotten.

ReignOnU said:
Shadow volumes: Extrude shadow polygons

The more polygons you use, the more impractical it becomes (fill rate aside). And we want soft shadows, not pixel-perfect hard shadows.

ReignOnU said:
Shadow maps: Store a cubemap for every light.

There was some interesting research on compressing static SMs using a DAG. It gives sharp shadows at any distance and can support large worlds. Texel lookup requires traversal, but the authors claimed it's still faster than dynamic shadow cascades.
But it's restricted to static lighting, so no time of day or dynamic objects. And it's just shadows. If we go to baking, we ideally want full GI.

ReignOnU said:
Oh does it use light probes or spherical harmonics?

Yeah, basically i generate probes at high density on the surface. IIRC, i could update 60k probes at 60 fps on PS4 HW. It's like rendering 60,000 tiny frame buffers. Bounces are infinite, and emissive materials for light sources are free.
I currently use 4x4-pixel hemisphere maps. SH or other probe formats would work too, but that's a costlier lookup for no win.
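Not the poster's actual code, just a toy illustration of the general idea: a tiny NxN hemisphere map of incoming radiance per surface probe, resolved to a diffuse value by a cosine-weighted sum.

    #include <cmath>

    struct RGB { float r, g, b; };

    // Toy illustration only: resolve a tiny 4x4 hemisphere map of incoming radiance
    // into a diffuse value for one surface probe. Row i maps to a polar-angle band
    // above the probe's normal, column j to an azimuth slice; each texel stores RGB
    // radiance arriving from that direction.
    RGB resolveProbe(const RGB radiance[4][4])
    {
        const float PI = 3.14159265f;
        RGB   sum = { 0.0f, 0.0f, 0.0f };
        float weightSum = 0.0f;

        for (int i = 0; i < 4; ++i) {
            for (int j = 0; j < 4; ++j) {
                float theta = (i + 0.5f) / 4.0f * (PI * 0.5f);  // texel-center polar angle
                float w = std::cos(theta) * std::sin(theta);    // cosine term * band solid angle
                sum.r += radiance[i][j].r * w;
                sum.g += radiance[i][j].g * w;
                sum.b += radiance[i][j].b * w;
                weightSum += w;
            }
        }
        // Cosine-weighted average of incoming radiance; multiply by PI if you want
        // Lambertian irradiance in physical units.
        return { sum.r / weightSum, sum.g / weightSum, sum.b / weightSum };
    }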

Shadows and filtering

Let me show you a bit on shadows and filtering - the following few images show shadows with various filters/approaches and with GI turned on. The GI contributes the most in a given frame.

For GI this currently uses Sparse Voxel Octree based global illumination. Showing shadow methods with full path tracing would make no sense, as perfectly realistic shadows are a by-product of that algorithm.

Ray Tracing

Going with fully ray traced shadows often yields the best results, like:

The holy grail - just use realtime ray tracing with enough samples. It works, but is heavy.
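In code, "enough samples" just means averaging several shadow rays toward points on the area light. A sketch of that loop with a stand-in occlusion query (the occluded callback would be your BVH or hardware any-hit test; the names here are hypothetical):

    #include <functional>
    #include <random>

    struct Vec3f { float x, y, z; };

    // Stand-in for a BVH or hardware any-hit query: does the segment from 'from'
    // to 'to' hit any geometry?
    using OcclusionQuery = std::function<bool(const Vec3f& from, const Vec3f& to)>;

    // Fraction of a quad-shaped area light visible from 'shadingPoint', estimated
    // with 'sampleCount' shadow rays. 1.0 = fully lit, 0.0 = fully shadowed; the
    // values in between are the penumbra, for free.
    float areaLightVisibility(const Vec3f& shadingPoint,
                              const Vec3f& lightCorner,  // one corner of the light quad
                              const Vec3f& lightEdgeU,   // the two edges spanning it
                              const Vec3f& lightEdgeV,
                              int sampleCount,
                              const OcclusionQuery& occluded)
    {
        std::mt19937 rng(12345);                       // fixed seed keeps the sketch deterministic
        std::uniform_real_distribution<float> uni(0.0f, 1.0f);

        int visible = 0;
        for (int s = 0; s < sampleCount; ++s) {
            float u = uni(rng), v = uni(rng);          // random point on the light surface
            Vec3f p = { lightCorner.x + u * lightEdgeU.x + v * lightEdgeV.x,
                        lightCorner.y + u * lightEdgeU.y + v * lightEdgeV.y,
                        lightCorner.z + u * lightEdgeU.z + v * lightEdgeV.z };
            if (!occluded(shadingPoint, p))
                ++visible;
        }
        return float(visible) / float(sampleCount);
    }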

Shadow Maps

Percentage Closer Filtering

Using standard shadow maps with a PCF filter can yield similar results:

Percentage Closer Filtering produces good enough soft shadows for most cases.

You can notice that this produces shadows that are soft everywhere by about the same amount - but what we really want is often penumbra shadows (like in the ray traced case, i.e. softer with growing distance between caster and receiver).
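A minimal PCF filter is just an averaged depth comparison over a small kernel. A CPU-side sketch, where sampleShadowDepth stands in for the shadow-map texture fetch:

    #include <functional>

    // Stand-in for sampling the shadow map at a (u, v) coordinate in [0, 1].
    using DepthSampler = std::function<float(float u, float v)>;

    // Percentage Closer Filtering: compare the receiver's light-space depth against
    // the shadow map at several offsets and average the binary results, which turns
    // hard aliased stairs into a uniformly softened edge.
    float pcfShadow(const DepthSampler& sampleShadowDepth,
                    float u, float v,         // receiver position in shadow-map space
                    float receiverDepth,      // receiver depth in light space
                    float texelSize,          // 1.0 / shadow map resolution
                    float bias = 0.002f,
                    int   kernelRadius = 2)   // 2 -> a 5x5 tap kernel
    {
        float lit  = 0.0f;
        int   taps = 0;
        for (int y = -kernelRadius; y <= kernelRadius; ++y) {
            for (int x = -kernelRadius; x <= kernelRadius; ++x) {
                float occluderDepth = sampleShadowDepth(u + x * texelSize, v + y * texelSize);
                lit += (receiverDepth - bias <= occluderDepth) ? 1.0f : 0.0f;
                ++taps;
            }
        }
        return lit / float(taps);   // 1.0 = fully lit, 0.0 = fully shadowed
    }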

Percentage Closer Soft Shadows

We can use a somewhat clever filter like PCSS to do so:

Percentage Closer Soft Shadows - while the calculation is approximate and straightforward, it produces penumbra shadows that are very good. These should easily cover almost all cases where area-light shadows are needed at high performance.
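The "somewhat clever" part of PCSS amounts to two extra steps before the PCF loop: search the neighborhood for blockers to estimate an average blocker depth, then scale the filter radius by the similar-triangles penumbra estimate. A sketch with the same stand-in sampler as the PCF example (repeated so the snippet stays self-contained):

    #include <algorithm>
    #include <functional>

    using DepthSampler = std::function<float(float u, float v)>;

    // Percentage Closer Soft Shadows: estimate the average blocker depth in a
    // search region, derive a penumbra size from it via similar triangles with the
    // light size, then run a PCF loop with a radius proportional to that penumbra.
    float pcssShadow(const DepthSampler& shadowDepth,
                     float u, float v, float receiverDepth,
                     float texelSize,
                     float lightSizeUV,          // light size expressed in shadow-map UV units
                     float bias = 0.002f)
    {
        // 1) Blocker search over a region proportional to the light size.
        const int search = 3;                    // (2*3+1)^2 = 49 taps
        float blockerSum = 0.0f; int blockers = 0;
        for (int y = -search; y <= search; ++y)
            for (int x = -search; x <= search; ++x) {
                float d = shadowDepth(u + x * lightSizeUV / search,
                                      v + y * lightSizeUV / search);
                if (d < receiverDepth - bias) { blockerSum += d; ++blockers; }
            }
        if (blockers == 0) return 1.0f;          // nothing between receiver and light

        // 2) Penumbra width from similar triangles: wider when the blocker is far
        //    from the receiver relative to the light.
        float avgBlocker = blockerSum / float(blockers);
        float penumbraUV = (receiverDepth - avgBlocker) / avgBlocker * lightSizeUV;

        // 3) Ordinary PCF with a step matching the penumbra estimate.
        float step = std::max(penumbraUV * 0.5f, texelSize);
        float lit = 0.0f; int taps = 0;
        for (int y = -2; y <= 2; ++y)
            for (int x = -2; x <= 2; ++x) {
                float d = shadowDepth(u + x * step, v + y * step);
                lit += (receiverDepth - bias <= d) ? 1.0f : 0.0f;
                ++taps;
            }
        return lit / float(taps);
    }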

Unreal Engine's RTSM

I also attempted RTSM-like shadows (which UE did), but my implementation is not finished yet and does something incorrect, so the shadows “grow”. Current results:

My attempt at an RTSM variant. I tried to do it correctly, but it still has a few bugs.

Shadow Volumes

In the past I did attempt shadow volumes (successfully) using volume extrusion in a geometry shader. It just ends up being heavy (fill rate and memory, as you need additional adjacency information in the geometry to properly build the silhouette). This also brings more problems, like self-shadowing.

There is a way to make shadow volumes penumbra-based instead of just pixel-perfect; it just becomes even heavier on fill rate. If you're interested, these are called penumbra wedges.

With growing geometric complexity, these became irrelevant. Nowadays, at some point (in terms of geometric complexity) it becomes cheaper to just do pixel-perfect ray traced shadows instead of shadow volumes.

Handling Shadow Maps

Handling shadow maps in actual scenes can be a bit problematic. Shadow maps tend to be very performant - but you still don't want to render what you won't see … and having multiple lights with various setups means one thing → you want to prioritize some over others.

The following example is from a model viewer/compressor/… one of my engine tools. Please don't mind the GUI; it still needs a lot of work. This view is just lit + textured (no GI).

The important part is the yellow debug dialog, showing the shadow data for this frame. It contains 4 cascades for the directional light, multiple faces for the point light, and a larger buffer for the spotlight. Unlike the UE5 solution, this doesn't virtualize shadows into separate pages, but rather uses a texture-atlas based approach, where memory is assigned to each shadow based on the user's request and settings.

The shadows are currently rendered each frame, but in practice they don't need to be unless something moves (is dynamic) within a given frustum. They can optionally be cached.
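A sketch of that caching idea (hypothetical types, not the tool's actual code): keep the rendered shadow data per light and only mark it dirty when a dynamic object moves inside that light's volume.

    #include <vector>

    // Hypothetical types for illustration; not the actual engine code.
    struct Sphere { float x, y, z, radius; };

    struct DynamicObject { Sphere bounds; bool movedThisFrame; };

    struct ShadowCaster {
        Sphere influence;          // rough bounding volume of everything this light can shadow
        bool   cacheValid = false;
        void   renderShadowMap();  // assumed to exist elsewhere
    };

    static bool spheresOverlap(const Sphere& a, const Sphere& b)
    {
        float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        float r  = a.radius + b.radius;
        return dx * dx + dy * dy + dz * dz <= r * r;
    }

    // Re-render a light's shadow data only if something dynamic moved inside its volume.
    void updateShadows(std::vector<ShadowCaster>& lights,
                       const std::vector<DynamicObject>& objects)
    {
        for (ShadowCaster& light : lights) {
            for (const DynamicObject& obj : objects)
                if (obj.movedThisFrame && spheresOverlap(light.influence, obj.bounds))
                    light.cacheValid = false;

            if (!light.cacheValid) {
                light.renderShadowMap();
                light.cacheValid = true;
            }
        }
    }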

Shadows can also use different filters (probably not observable in this screenshot) - the point light uses PCF, while the spot light and directional light use PCSS.

For the curious - the buffer is yellow because 2 channels are stored, depth and depth^2, for VSM (Variance Shadow Maps) support. In practice I don't use them, because the light leaking issues are too strong, and over-darkening tends to ruin penumbras (in the PCF/PCSS case).
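For reference, the depth / depth^2 pair feeds the Chebyshev upper bound in a VSM lookup; a sketch of that test, which also shows where the light leaking comes from (the result is only an upper bound on visibility):

    #include <algorithm>

    // Variance Shadow Map visibility test. 'mean' and 'meanSq' are the filtered
    // depth and depth^2 values fetched from the two-channel map; 'receiverDepth'
    // is the receiver's depth in light space. The result is an upper bound on
    // visibility (Chebyshev's one-sided inequality), which is exactly why VSM can
    // leak light behind thick occluders.
    float vsmVisibility(float mean, float meanSq, float receiverDepth,
                        float minVariance = 0.0001f)
    {
        if (receiverDepth <= mean)
            return 1.0f;                                    // in front of the average occluder

        float variance = std::max(meanSq - mean * mean, minVariance);
        float d        = receiverDepth - mean;
        return variance / (variance + d * d);
    }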

Global Illumination

Global illumination also produces secondary shadows like in the following image:

I had to over-expose that and increase the contrast - but around the vases you can see the secondary shadow cast by light reflected from the wall.

In this case these come from Sparse Voxel Octree Global Illumination - you can get a lot better results with path tracing. Overall they won't have that big an impact on the image in directly lit parts, but a massive impact in parts lit only by indirect light (in shadows).

There are various methods to fake such an effect, like:

  • Imperfect Shadow Maps (dynamic)
  • Voxel Global Illumination (Cone Tracing) (dynamic)
  • Light Mapping (baked)
  • Filtered Path Tracing result
  • (Dynamic) Light Probe Global Illumination
  • etc.

Or just do it correctly with either:

  • Unbiased Path Tracing (+ variants)
  • Unbiased Progressive Photon Mapping

Sadly, any of these topics is far beyond the scope of a forum post.

My current blog on programming, linux and stuff - http://gameprogrammerdiary.blogspot.com

