
Blending local-global envmaps

Started by January 30, 2018 12:23 PM
11 comments, last by knarkowicz 7 years ago

In the past I developed a reflection probe/environment map system with a PBR pipeline in mind. It had support for global and local environment maps. The locals only affect the scene inside their boundaries, which are essentially OBBs. Several local probes can be placed inside the scene, and they are blended on top of each other (if they intersect). In a deferred renderer they would be rendered as boxes and sample from the G-buffer at the same time as the lights are rendered. If there are only local probes, there can be areas where reflection information is missing, which is undesirable with PBR rendering, as metals would not receive any color. In those areas we should fall back to something, but here comes my question: what? I experimented with some solutions but found none of them really appealing:

  1. Fall back to sky color: This could work in outside areas, but indoors it will just break hard.
  2. Fall back to the probe closest to the camera: With some blending it could work so that it avoids "popping", but far-away reflections will also change with the camera position, which can be very distracting.
  3. Fall back to the probe closest to pixel world position: Has several problems
    1. How to determine per pixel which probe is closest?
    2. We should really retrieve 3 closest probes and blend them
    3. But that means 3 cubemap samples, distance computations and blending, maybe even in the object rendering shader?
    4. Maybe use the 2 probes closest to the camera and blend between those? This produces a straight line between the affected boundaries, but results in popping when a probe that previously wasn't closest gets close to the camera.
  4. Fall back to the closest cubemap per object? Seems nice for static objects, but this can also break easily.

Does anyone have other solutions that they use? I would like to have a general solution to this problem.
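For reference, the per-pixel ray trace into the probe box mentioned above can be sketched roughly like this. This is an illustrative numpy version, not the engine's actual shader: it works in probe-local space (where the OBB reduces to an AABB), and all names are hypothetical:

```python
import numpy as np

def parallax_correct(pos, refl, box_min, box_max, probe_pos):
    # Intersect the reflection ray with the probe box (probe-local space,
    # so the OBB reduces to an AABB) and aim the cubemap lookup at the
    # hit point as seen from the probe's capture position.
    inv = 1.0 / refl  # a real shader also guards axis-parallel rays
    t_exit = np.min(np.maximum((box_min - pos) * inv, (box_max - pos) * inv))
    hit = pos + refl * t_exit   # point on the box surface
    return hit - probe_pos      # corrected cubemap lookup direction
```

In HLSL this is the usual handful of `max`/`min` intrinsics, so it is cheap per pixel, but it still has to run for every local probe touching the pixel, which is the extra cost discussed below.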

Can you, instead, ensure each piece of surface is affected by at least one probe? This way you could solve the problem offline by extending probe volumes.

3.2.: I think to do this in a robust manner, you need the 4 closest probes from a Voronoi tetrahedralization. But this approach could also replace your current OBB approach completely, so it's not just a 'fallback'.

Eventually build a very low resolution volume grid of pointers to existing probes?
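The pointer-grid idea could be sketched like this (illustrative Python, not anyone's actual engine code): precompute, for every cell of a coarse volume grid, the index of the nearest probe, so a pixel only ever needs one lookup.

```python
import numpy as np

def build_probe_grid(probe_positions, grid_res, scene_min, scene_max):
    # For every cell of a coarse volume grid, store the index of the
    # nearest probe; at shading time a pixel needs only one grid fetch.
    probes = np.asarray(probe_positions, dtype=float)
    smin = np.asarray(scene_min, dtype=float)
    cell = (np.asarray(scene_max, dtype=float) - smin) / grid_res
    grid = np.empty((grid_res,) * 3, dtype=np.int32)
    for idx in np.ndindex(grid.shape):
        center = smin + (np.asarray(idx) + 0.5) * cell
        grid[idx] = np.argmin(np.linalg.norm(probes - center, axis=1))
    return grid
```

In a renderer this grid would live in a small 3D texture of integer indices; refreshing it when a probe moves is a trivial re-run of the loop above.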

 

39 minutes ago, JoeJ said:

Can you, instead, ensure each piece of surface is affected by at least one probe? This way you could solve the problem offline by extending probe volumes.

3.2.: I think to do this in a robust manner, you need the 4 closest probes from a Voronoi tetrahedralization. But this approach could also replace your current OBB approach completely, so it's not just a 'fallback'.

Eventually build a very low resolution volume grid of pointers to existing probes?

 

Thank you for the ideas. I wanted to avoid using many local probes to fill every surface, because they are heavier to compute than global probes (they need a ray trace into the OBB in the pixel shader). The parallax effect they bring, which is why I even use local probes, is hardly visible on rough surfaces, so those would benefit from having just a global probe.

I also want to avoid offline methods; I want something fully real-time (this is for a hobby project). But a grid of probe pointers sounds like a neat idea. I just implemented a new system for them, storing the probes inside a TextureCubeArray, so indexing would be easy even in a Forward+ rendering object shader. A problem with a regular grid is that the local probes are projected as boxes, so the box sides would be visible in the reflections and would be distracting if the probes don't fit the room. I'll be toying with this idea though.
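The per-pixel side of that, world position to grid cell to TextureCubeArray slice, could look roughly like this (an illustrative Python sketch; `grid` is assumed to be a 3D integer array of probe indices, as in a pointer grid):

```python
import numpy as np

def probe_slice_for_pixel(world_pos, grid, scene_min, scene_max):
    # Map the pixel's world position to a cell of the coarse pointer
    # grid; the stored probe index doubles as the cubemap array slice.
    smin = np.asarray(scene_min, dtype=float)
    extent = np.asarray(scene_max, dtype=float) - smin
    uvw = (np.asarray(world_pos, dtype=float) - smin) / extent  # 0..1
    idx = np.clip((uvw * np.asarray(grid.shape)).astype(int),
                  0, np.asarray(grid.shape) - 1)
    return int(grid[tuple(idx)])
```

In a Forward+ object shader this is just one integer texture fetch followed by a `TextureCubeArray` sample with that slice.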

Yeah as above you can make it into an art/content problem :)

In development builds, render any pixel not covered by a probe in flashing pink so that content creators can see the error. 

You can make the OBB/parallax correction feature optional, to allow "localised global" probes. You might bathe a whole building in a non-parallax probe, then add a few parallax probes to important rooms only. 

Going in other directions though, you can fall back to other data sets besides probes. If you have lightmaps, you can fall back to them (I've done runtime lightmap baking on a PS3/360 game once :) ), or AO bakes tinted with ambient colours. We've often defined ambience on a spherical domain with three colours - up/side/down, weighted by sat(n.y), 1-abs(n.y) and sat(-n.y), where n = world normal and y = up. You could define these in particular regions the same way that you define your probes currently, for cases where an artist wants to fix the global sky leaking in, but doesn't want to add the cost of another runtime probe.
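The three-colour ambient scheme above is only a few instructions; a minimal sketch (Python standing in for the shader, names illustrative), with the saturate weights chosen so they always sum to one over the sphere:

```python
def ambient_tricolor(normal_y, up_color, side_color, down_color):
    # Blend three ambient colours on a spherical domain by the
    # world-space normal's y component: sat(n.y) for up,
    # 1-abs(n.y) for side, sat(-n.y) for down.
    sat = lambda x: min(max(x, 0.0), 1.0)
    w_up, w_side, w_down = sat(normal_y), 1.0 - abs(normal_y), sat(-normal_y)
    return tuple(w_up * u + w_side * s + w_down * d
                 for u, s, d in zip(up_color, side_color, down_color))
```

At n.y = 1 only the up colour survives, at n.y = 0 only the side colour, and at n.y = -1 only the down colour, so the weights partition the sphere with no normalization needed.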

In your specific case I would go with a very simple solution:

1. Set of global probes. Nearest one covers entire scene.

2. On top of that blend your local probes.
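That layering reduces to a very small amount of code; a hedged sketch (illustrative Python, with each local probe contributing a colour and a 0..1 weight, e.g. a falloff toward its OBB boundary):

```python
def shade_reflection(global_color, local_samples):
    # Start from the nearest global probe's sample, then alpha-blend
    # each overlapping local probe on top in order.
    # local_samples: list of (color, weight) pairs, weight in [0, 1].
    result = list(global_color)
    for color, weight in local_samples:
        result = [r * (1.0 - weight) + c * weight
                  for r, c in zip(result, color)]
    return tuple(result)
```

A fully covering local probe (weight 1) completely replaces the global sample, while near its edge the weight fades and the global probe shows through.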

14 hours ago, turanszkij said:

I also want to avoid offline methods, I want something fully real time

But the probe positions are still static data, I guess? What limitations would you expect from an offline method?

You could use a voxelization of the scene, flag voxels inside probe OBBs and use the remaining unlit voxels to extend the closest probe's OBB. You already have voxelization, and it could be real-time or progressively updated if really needed.
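The extension step could be sketched like this (illustrative Python; boxes are treated as AABBs in probe space, and all names are hypothetical): each uncovered surface voxel grows the box of the probe whose center is nearest.

```python
import numpy as np

def extend_probes(probe_boxes, probe_centers, uncovered_points):
    # For each surface voxel not covered by any probe box, grow the
    # nearest probe's (min, max) box in place to enclose that voxel.
    centers = np.asarray(probe_centers, dtype=float)
    for p in np.asarray(uncovered_points, dtype=float):
        i = int(np.argmin(np.linalg.norm(centers - p, axis=1)))
        bmin, bmax = probe_boxes[i]
        probe_boxes[i] = (np.minimum(bmin, p), np.maximum(bmax, p))
    return probe_boxes
```

Run over the flagged voxelization, this guarantees every lit surface falls inside at least one probe box, at the cost of boxes that may overshoot the room geometry.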

18 hours ago, knarkowicz said:

In your specific case I would go with a very simple solution:

1. Set of global probes. Nearest one covers entire scene.

2. On top of that blend your local probes.

I mentioned that approach; what I dislike about it is that far-away objects will have a very wrong reflection, and also that when the closest envmap changes the entire scene gets re-lit. But I will probably go with this one, as it can be implemented with no hard popping when a new envmap gets closest.

12 hours ago, JoeJ said:

But the probe positions are still static data, I guess? What limitations would you expect from an offline method?

You could use a voxelization of the scene, flag voxels inside probe OBBs and use the remaining unlit voxels to extend the closest probe's OBB. You already have voxelization, and it could be real-time or progressively updated if really needed.

The probe locations are mostly static, but they can be grabbed in the editor, moved, and refreshed instantly. About the voxelization, that is an interesting idea, though I would rather go the Remedy way then, which is placing a bunch of probes in relevant spots automatically with the help of a voxel grid. I will think about your idea, maybe try to implement it, as it sounds like an easy extension to the voxel GI I played around with recently.

On 30/01/2018 at 9:06 PM, Hodgman said:

Yeah as above you can make it into an art/content problem :)

No :D (In this case I would be delegating the problem to myself as probably only I will ever use this engine :) )

On 30/01/2018 at 9:06 PM, Hodgman said:

You can make the OBB/parallax correction feature optional, to allow "localised global" probes. You might bathe a whole building in a non-parallax probe, then add a few parallax probes to important rooms only. 

I want to do that; basically I just wondered how to blend between the global probes once I "leave the building". Say that when I exit through the door, we want to switch probes: now the outside environment will show the inside envmap for some time, and the whole scene will blend abruptly. Btw, which game did you bake lightmaps at runtime for?

Remedy also had voxelized pointers indicating which probes are relevant where. Heck, you could go a step further (or does Remedy do this already?) and store an SH probe, with channels pointing towards the relevant probes to blend. It'd be great for windows and the like; blending the relevant outdoor probes would be great there.

You could even make the entire system real-time, or near to it. Infinite Warfare used deferred probe rendering for real-time GI, and Shadow Warrior 2 had procedurally generated levels lit at creation time. I seriously hope those are the right links; I'm on a slow public wifi at the moment so...

Regardless, a nice trick is to use SH probes with, say, ambient occlusion info or static lighting info, to correct the cubemap lighting. This way you can use cubemaps for both specular and diffuse, and then at least somewhat correct them later.

4 hours ago, turanszkij said:

I mentioned that approach; what I dislike about it is that far-away objects will have a very wrong reflection, and also that when the closest envmap changes the entire scene gets re-lit. But I will probably go with this one, as it can be implemented with no hard popping when a new envmap gets closest.

In my solution global envmaps are separated from the local ones. Global ones should capture mostly the sky and generic features, and be used very sparsely (a few per entire level). This way it's enough to blend just two of them to have perfect transitions, and far-away reflections will look fine. I actually used this system for Shadow Warrior 2, just with a small twist - probes were generated and cached in real time. If you are interested you can check out some slides with notes: "Rendering of Shadow Warrior 2".
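The two-probe blend could be sketched like this (an illustrative Python sketch under the assumptions above, not the shipped code): weight the two globals nearest the camera by relative distance, so the transition is continuous as the camera moves.

```python
import numpy as np

def blend_two_nearest_globals(camera_pos, probe_positions):
    # Return (index, weight) for the two global probes nearest the
    # camera, weighted by relative distance: weight 1 at a probe's
    # position, 0.5 halfway between the two nearest probes.
    pos = np.asarray(probe_positions, dtype=float)
    d = np.linalg.norm(pos - np.asarray(camera_pos, dtype=float), axis=1)
    i, j = np.argsort(d)[:2]
    t = d[i] / (d[i] + d[j])
    return (int(i), 1.0 - t), (int(j), t)
```

Because the weights vary smoothly with camera position, a third probe entering the "nearest two" set does so exactly when its weight would be zero, which is why no hard pop occurs.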

22 hours ago, turanszkij said:

No :D (In this case I would be delegating the problem to myself as probably only I will ever use this engine :) )

You can still make it into a problem of manual labour per scene (a level editor task) instead of an algorithmic challenge :)

Either way you're going to find lighting bugs in the level editor so the difference is whether that prompts you to go and fix the code or massage the lighting data in the editor to hide the bug! 

22 hours ago, turanszkij said:

I want to do that; basically I just wondered how to blend between the global probes once I "leave the building". Say that when I exit through the door, we want to switch probes: now the outside environment will show the inside envmap for some time, and the whole scene will blend abruptly. Btw, which game did you bake lightmaps at runtime for?

I was suggesting to have one truly global probe, but then use large non-parallax local probes to override it in areas (such as a whole building), and then even smaller local probes to override rooms within the buildings. You'd define a soft falloff at the edge of each local probe region for blending, and the results aren't based on the camera position.
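One common way to define such an edge falloff (a hedged sketch in probe-local space; the fade width and names are illustrative, not a specific engine's API):

```python
import numpy as np

def probe_falloff(local_pos, half_extent, fade_width):
    # Weight is 1 deep inside the probe box, fading linearly to 0 over
    # fade_width as the position approaches the box boundary.
    # local_pos is the shaded point expressed in probe-local space.
    dist_to_edge = np.asarray(half_extent, float) - np.abs(np.asarray(local_pos, float))
    return float(np.clip(np.min(dist_to_edge) / fade_width, 0.0, 1.0))
```

Because the weight depends only on the shaded point's position inside the probe region, the blend is stable under camera motion, which is the property being described above.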

I did lightmap baking on Don Bradman Cricket 14 (the PC/PS3/360 edition; it wasn't used in the PS4/Xbone edition though). Bakes were budgeted 1 ms of GPU time per frame during gameplay, or 30 ms of GPU per frame on loading screens. A bake took about 3 minutes during gameplay, though we also had a low-quality setting if we needed one quicker. So, not useful for dynamic lights from explosions etc., but perfectly fine for dynamic time of day. Sports games also feature lots of camera cuts (e.g. after a football player is tackled, or a goal is scored, or before a bowler bowls in cricket), so we would wait for one of these camera cuts before switching the old lightmap out for the newest bake, so the change couldn't be noticed :)

