
Voxel Cone Tracing constraints and flexibility

Started by MonterMan February 07, 2018 05:58 AM
11 comments, last by Vilem Otte 6 years, 11 months ago

Hi all. I have been looking for a real-time global illumination algorithm to use in my game. I've found voxel cone tracing, and I'm debating whether it's worth investing my time in researching and implementing it. I have doubts for the following reasons:

• I see a lot of people saying it's really hard to implement.

• Apparently the algorithm requires some Nvidia extension to work efficiently, according to the original paper (I highly doubt it, though).

• Barely real-time performance, meaning it would be too slow to use in a game.

 

So in order to determine if I should invest time in voxel cone tracing, I want to ask the following questions:

• Is the algorithm itself flexible enough that I can increase performance by tweaking it (probably lowering the GI quality at the same time, but I don't care about that)?

• Can I implement it without any special driver requirements or extensions, despite what the paper claims?

A programmer interested in game technology. 

The Nvidia paper is... unreliable. Cone tracing is potentially fast; the problem is that light leaking makes it hard to implement reliably. By cone tracing's nature, the farther you trace the more light leaks, but the shorter the cone you trace the less light you get. Overall it was an idea that seemed like the future two-plus years ago, but it has since fallen out of favor due to these weaknesses.
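To make that concrete, a single diffuse cone trace boils down to something like the sketch below (CPU-style C++; SampleVoxelMip is a made-up helper standing in for a prefiltered voxel mip chain storing premultiplied radiance and occlusion). The footprint, and therefore the mip level, grows with distance, and those coarse far samples are exactly where the leaking comes from:

#include <cmath>

struct Vec3f { float x, y, z; };
struct Vec4f { float r, g, b, a; };

// hypothetical helper: returns prefiltered, premultiplied radiance (rgb) and occlusion (a)
// from a voxel mip pyramid at the given world position and mip level
Vec4f SampleVoxelMip (const Vec3f &pos, float mipLevel);

Vec3f TraceCone (Vec3f origin, Vec3f dir, float coneAngle, float maxDist, float voxelSize)
{
    Vec3f radiance = {0, 0, 0};
    float occlusion = 0.0f;
    float dist = voxelSize; // start one voxel out to avoid self-sampling

    while (dist < maxDist && occlusion < 1.0f)
    {
        float diameter = 2.0f * dist * tanf (coneAngle * 0.5f); // cone footprint grows with distance
        float mip = log2f (diameter / voxelSize);               // ...so farther samples come from coarser mips
        Vec3f p = { origin.x + dir.x * dist,
                    origin.y + dir.y * dist,
                    origin.z + dir.z * dist };

        Vec4f s = SampleVoxelMip (p, mip);

        // front-to-back accumulation; thin walls average away at high mip levels,
        // which is where the light leaking comes from
        radiance.x += (1.0f - occlusion) * s.r;
        radiance.y += (1.0f - occlusion) * s.g;
        radiance.z += (1.0f - occlusion) * s.b;
        occlusion  += (1.0f - occlusion) * s.a;

        dist += diameter * 0.5f; // step proportional to the footprint
    }
    return radiance;
}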

There are a lot of other GI techniques that can be considered depending on your requirements. E.g. is the environment static, highly deformable, or generated at runtime? Does light need to move fast, or can it move slowly (e.g. a slow time of day)?

That being said, Signed Distance Field tracing plus some version of lightcuts/many lights looks like it could, potentially, do what cone tracing once promised in real time. Here's a nice presentation on signed distance fields, which are essentially the same sparse voxel octree as in cone tracing, except you "sphere trace" instead of marching a cone; the benefit being no light leaking. Lightcuts/VPLs/"many lights" would be the other half of the equation. Here's a nice presentation from Square Enix, wherein their biggest cost in the test scene is their choice of "adaptive imperfect shadow maps", which is a really hacky and slow way to do what SDF tracing can do easier and faster.
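For comparison, the core of sphere tracing is tiny; something like this sketch, where SceneSDF is a stand-in for sampling a baked distance-field volume:

struct Vec3f { float x, y, z; };

// hypothetical: signed distance from p to the nearest surface
// (in practice this samples a baked distance-field volume)
float SceneSDF (const Vec3f &p);

// returns the hit distance along the ray, or -1 if nothing was hit within maxDist
float SphereTrace (Vec3f origin, Vec3f dir, float maxDist)
{
    float t = 0.0f;
    for (int i = 0; i < 128 && t < maxDist; i++)
    {
        Vec3f p = { origin.x + dir.x * t,
                    origin.y + dir.y * t,
                    origin.z + dir.z * t };
        float d = SceneSDF (p); // a sphere of radius d around p is guaranteed empty
        if (d < 0.001f)
            return t;           // hit: the ray never steps through geometry, so no leaking
        t += d;                 // safe step
    }
    return -1.0f;
}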


Thank you for your insight. It seems like I have to reconsider which GI algorithm to use. These are the constraints in my game:

• A single directional light source. It never changes.

• The mesh is mostly static, except for characters that run around.

• I want one bounce of diffuse indirect light.

 

I'd appreciate it if you could suggest a GI algorithm that fits my criteria. Thank you for your answers.

A programmer interested in game technology. 

1 hour ago, MonterMan said:

• Apparently the algorithm requires some Nvidia extension to work efficiently, according to the original paper (I highly doubt it, though).

The extension (conservative rasterization; I think AMD Vega finally has it) is not necessary. You can use a geometry shader to extend triangles (slower, of course). You can also accept some holes and ignore this completely, because light will leak anyway.

In addition to Frentic's response: voxelization requires LODs in practice, which means it always breaks down at distance. Empty space between two walls vanishes and becomes entirely solid, so no light reaches that space. That's the opposite of leaking light, but it comes from the same limitations.

6 minutes ago, MonterMan said:

Thank you for your insight. It seems like I have to reconsider which GI algorithm to use. These are the constraints in my game:

• A single directional light source. It never changes.

• The mesh is mostly static, except for characters that run around.

• I want one bounce of diffuse indirect light.

Baking makes sense. How dynamic is your lighting? Time of day? Moving sun? Mainly interiors with static lighting?

Lighting is completely static. The sun never moves and it's always the same time of day. Mainly outdoor scenes with small buildings that have interiors.

 

Thanks for the additional info on voxel cone tracing's shortcomings. 

A programmer interested in game technology. 

Then there seems to be no need for any kind of real-time GI at all, not even something that only supports dynamic lights on a static world.

So you could bake everything to lightmaps, but there are many options for what to store in a texel:

• Just diffuse, like Quake 3 did.

• Add a primary light direction to support normal mapping (see the small sketch after the link below).

• Store the full environment to support a full BRDF.

See here for an introduction and details: https://mynameismjp.wordpress.com/2016/10/09/new-blog-series-lightmap-baking-and-spherical-gaussians/
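As a rough illustration of the second option (the struct and names below are made up for the example, not taken from MJP's series): storing an ambient term plus light from a dominant direction lets the per-pixel normal still modulate the baked light:

struct Vec3f { float x, y, z; };

// one lightmap texel storing an ambient term plus light arriving from a dominant direction
struct DirectionalTexel
{
    Vec3f ambient;      // baked non-directional part
    Vec3f directional;  // baked light from the dominant direction
    Vec3f dominantDir;  // unit-length dominant incoming light direction
};

static float Dot (const Vec3f &a, const Vec3f &b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// the per-pixel (normal-mapped) normal modulates only the directional part
Vec3f ShadeTexel (const DirectionalTexel &t, const Vec3f &normal, const Vec3f &albedo)
{
    float nDotL = Dot (normal, t.dominantDir);
    if (nDotL < 0.0f) nDotL = 0.0f;
    return { albedo.x * (t.ambient.x + t.directional.x * nDotL),
             albedo.y * (t.ambient.y + t.directional.y * nDotL),
             albedo.z * (t.ambient.z + t.directional.z * nDotL) };
}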

 

For dynamic objects you could use irradiance probes,

either placed in a grid,

or placed by hand with some radius (or shape) of influence,

or interpolating the 4 closest hand-placed ones from a Voronoi tetrahedralization (a small sketch of that interpolation follows below).
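A rough sketch of that last option, assuming you already found the tetrahedron containing the sample point (building the tetrahedralization and the lookup are not shown):

struct Vec3f { float x, y, z; };

static Vec3f Sub (const Vec3f &a, const Vec3f &b) { return { a.x-b.x, a.y-b.y, a.z-b.z }; }
static float Dot (const Vec3f &a, const Vec3f &b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3f Cross (const Vec3f &a, const Vec3f &b)
{
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}

// 6 * signed volume of the tetrahedron (a,b,c,d)
static float SignedVolume (const Vec3f &a, const Vec3f &b, const Vec3f &c, const Vec3f &d)
{
    return Dot (Cross (Sub (b,a), Sub (c,a)), Sub (d,a));
}

// barycentric weights of point p inside tetrahedron (p0..p3); they sum to 1
void TetrahedronWeights (const Vec3f &p, const Vec3f &p0, const Vec3f &p1,
                         const Vec3f &p2, const Vec3f &p3, float w[4])
{
    float v = SignedVolume (p0, p1, p2, p3);
    w[0] = SignedVolume (p,  p1, p2, p3) / v;
    w[1] = SignedVolume (p0, p,  p2, p3) / v;
    w[2] = SignedVolume (p0, p1, p,  p3) / v;
    w[3] = SignedVolume (p0, p1, p2, p ) / v;
}

// blend the 4 probes (one RGB irradiance value each here for simplicity;
// real code would blend SH coefficients the same way)
Vec3f BlendProbes (const Vec3f probeIrradiance[4], const float w[4])
{
    Vec3f r = {0, 0, 0};
    for (int i = 0; i < 4; i++)
    {
        r.x += probeIrradiance[i].x * w[i];
        r.y += probeIrradiance[i].y * w[i];
        r.z += probeIrradiance[i].z * w[i];
    }
    return r;
}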

 

You could also merge those static/dynamic approaches like Quantum Break did (they use voxels instead of lightmap texels): https://users.aalto.fi/~silvena4/Publications/SIGGRAPH_2015_Remedy_Notes.pdf

 

For reflections, probes are state of the art, with the same placement options as listed for dynamic objects above, extended with screen-space raytracing.

 

So... some work. Probably not really easier to implement than voxel cone tracing (the work goes into pre-processing tools), but faster, higher quality, and with far fewer issues.

 

Edit: The difference between one bounce and infinite bounces can be night and day in interiors, so don't forget to utilize this if you go baked.

 

 

 


Got it. I was initially looking for a real-time GI algorithm because I didn't want to do a separate preprocessing step. But now it seems like baking is the way to go to achieve the highest quality. Thank you for the awesome resources. That's indeed some work, but it's going to be worth it in the end :).

 

I actually looked into baking lightmaps before. One approach that caught my attention is the light precomputation in The Witness: https://web.archive.org/web/20170227054745/http://the-witness.net/news/2010/09/hemicube-rendering-and-integration/ which is based on a radiosity algorithm: https://web.archive.org/web/20120324095518/http://freespace.virgin.net/hugo.elias/radiosity/radiosity.htm.

But I gave up on the above methods because they require a ton of tweaking to get the result just right, so I looked into real-time stuff instead. 

 

Again thanks for the help guys, now it's pretty clear to me what to do. 

A programmer interested in game technology. 

The Witness tool for automatic UVs is on GitHub: https://github.com/Thekla/thekla_atlas

But Microsoft's UVAtlas is also pretty nice, maybe better: https://github.com/Microsoft/UVAtlas

From that you could get a 'surfel' representation of the scene (disks with an area and a normal for each lightmap texel).

Then calculating diffuse lighting is easy to understand, like my 'educational' code below. Tracing for visibility is missing; if you add this with some tree for acceleration, you already have a solution. (Each call to SimulateOneBounce adds one bounce, so accuracy increases.)

 


struct Radiosity
{
    typedef sVec3 vec3;

    // component-wise vector multiply
    inline vec3 cmul (const vec3 &a, const vec3 &b)
    {
        return vec3 (a[0]*b[0], a[1]*b[1], a[2]*b[2]);
    }

    struct AreaSample
    {
        vec3 pos;       // position of the surfel (lightmap texel center)
        vec3 dir;       // surface normal
        float area;     // surfel area
        vec3 color;     // diffuse albedo
        vec3 received;  // incoming light accumulated so far
        float emission; // using just color * emission to save memory
    };

    AreaSample *samples;
    int sampleCount;

    void InitScene ()
    {
        // simple cylinder
        int nU = 144;
        int nV = int( float(nU) / float(PI) );
        float scale = 2.0f;

        float area = (2 * scale / float(nU) * float(PI)) * (scale / float(nV) * 2);

        sampleCount = nU*nV;
        samples = new AreaSample[sampleCount];

        AreaSample *sample = samples;
        for (int v=0; v<nV; v++)
        {
            float tV = float(v) / float(nV);

            for (int u=0; u<nU; u++)
            {
                float tU = float(u) / float(nU);
                float angle = tU * 2.0f*float(PI);
                vec3 d (sin(angle), 0, cos(angle));
                vec3 p = (vec3(0,tV*2,0) + d) * scale;

                sample->pos = p;
                sample->dir = -d; // normals point inwards
                sample->area = area;

                sample->color = ( d[0] < 0 ? vec3(0.7f, 0.7f, 0.7f) : vec3(0.0f, 1.0f, 0.0f) );
                sample->received = vec3(0,0,0);
                sample->emission = ( (d[0] < -0.97f && tV > 0.87f) ? 35.0f : 0 ); // small emissive patch near the top acts as the light source

                sample++;
            }
        }
    }

    void SimulateOneBounce ()
    {
        for (int rI=0; rI<sampleCount; rI++) // receiver
        {
            vec3 rP = samples[rI].pos;
            vec3 rD = samples[rI].dir;
            vec3 accum (0,0,0);

            for (int eI=0; eI<sampleCount; eI++) // emitter
            {
                vec3 diff = samples[eI].pos - rP;

                float cosR = rD.Dot(diff);
                if (cosR > FP_EPSILON)
                {
                    float cosE = -samples[eI].dir.Dot(diff);
                    if (cosE > FP_EPSILON)
                    {
                        float visibility = 1.0f; // todo: in this example we know each surface sees every other surface, but in practice: trace a ray from receiver to emitter and set to zero if anything is hit (or use multiple rays for accuracy)

                        if (visibility > 0)
                        {
                            float area = samples[eI].area;
                            float d2 = diff.Dot(diff) + FP_TINY;
                            // point-to-disk form factor; cosR and cosE use the unnormalized diff, so the extra squared distance cancels against d2
                            float formFactor = (cosR * cosE) / (d2 * (float(PI) * d2 + area)) * area;

                            vec3 reflect = cmul (samples[eI].color, samples[eI].received);
                            vec3 emit = samples[eI].color * samples[eI].emission;

                            accum += (reflect + emit) * visibility * formFactor;
                        }
                    }
                }
            }

            samples[rI].received = accum;
        }
    }

    void Visualize ()
    {
        for (int i=0; i<sampleCount; i++)
        {
            vec3 reflect = cmul (samples[i].color, samples[i].received);
            vec3 emit = samples[i].color * samples[i].emission;

            vec3 color = reflect + emit;

            //float radius = sqrt (samples[i].area / float(PI));
            //Vis::RenderCircle (radius, samples[i].pos, samples[i].dir, color[0],color[1],color[2]);

            float radius = sqrt(samples[i].area * 0.52f);
            Vis::RenderDisk (radius, samples[i].pos, samples[i].dir, color[0],color[1],color[2], 4); // this renders a quad
        }
    }
};
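A minimal usage sketch of the above (the number of bounce passes is an arbitrary choice):

Radiosity radiosity;
radiosity.InitScene ();
for (int bounce = 0; bounce < 4; bounce++)
    radiosity.SimulateOneBounce (); // each pass adds one more bounce of indirect light
radiosity.Visualize ();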
	

 

 

 

 

 

I forgot to mention the alternative to the above.

You could just export your scene to a 3D rendering tool like Blender, use its automatic UV unwrapping, bake the lighting, and be done. This should be automatable and much less work.

I do not know if and how it is possible to get directional lighting information, but I assume there are options.

 

...or you use something like Mitsuba / Embree to calculate the lighting. That sounds flexible enough, and no unsolvable issues should pop up.

 

 

With voxel cone tracing your result will look something like this (to show what you can expect from real-time GI):

[Screenshot: voxel cone traced test scene]

This is single-bounce dynamic indirect illumination, and you can also re-use these cones for reflections. Noticeable flaws:

  • Light bleeding (on the bottom-right side, light bleeding from the green model is noticeable in the shadowed area)
  • Dark shadows (especially those cast by the object in the middle; this is due to only single-bounce global illumination)
  • Visible aliasing (even though 8x MSAA is used, the GI is computed in a non-MSAA buffer, so aliasing is going to be visible; I intentionally didn't use any hackier method for upsampling)
  • Some details not contributing to global illumination (due to low voxel resolution)

 

I've tried multiple solutions for GI over the years, and nothing can beat unbiased methods (path tracing or progressive photon mapping), yet those require a lot of samples to converge. Out of the other methods (Reflective Shadow Mapping + Imperfect Shadow Maps being another one with acceptable quality results), Voxel Cone Tracing is the winner for me in terms of quality and performance.

Of course pre-computed methods are another chapter, which are not really applicable to my case though.

My current blog on programming, linux and stuff - http://gameprogrammerdiary.blogspot.com

This topic is closed to new replies.
