Hi all,
I hope this belongs here. I suppose it is not exactly a graphics programming question because it is too theoretical in nature.
I watched Carmack's talk about "Principles of Lighting and Rendering" and two things seem a little weird to me.
1.) (not so important) ... he obviously does not see procedural content generation as an important part of the future. I am pretty sure it will be. Sure, we will throw gigabytes of data at the problem, but we will automate the content generation process and work with meta objects. Artists will want to create thousands of objects with one click, pick the keepers, and drag them into the scene. The rulesets will get extremely sophisticated, and engines will become better artists, architects and engineers than humans can be. I think saying that PCG only has a future in niche markets is just wrong.
2.) ... it just seems weird to me that a pathtracing / brute-force approach can be the most efficient solution to the problem. There is probably a lot I don't know about; maybe something about voxel-related optimizations makes it more efficient somehow. But I always thought some kind of light/energy rig should make incremental updates based on changes possible, so that the renderer does not have to start from scratch each frame.
What do you guys think? What do your guts and experiences tell you?
Is brute force pathtracing more doable than it sounds? Is there a kind of problem that a rig would be extremely bad at?
What I have in mind for the rig is this:
It is created initially, maybe stored in the map file or built initially when the level is loaded.
It would respond to geometry changes (deformation, introduction of new objects and light sources, etc.)
On change it would be traversable ... the rig would know which surfaces are affected by a surface receiving/emitting more light (or less) and recursively update the energy levels, either at the vertex, polygon, or even light-texture level.
The goal would be to get as close as possible to not having to cast any more rays after the first direct ray has determined which object it is looking at. I guess determining the reflected objects could also be optimized somehow with the help of the rig.
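To make the idea concrete, here is a minimal toy sketch of what I mean by such a rig (this is just my own illustration, not anything from the talk). It resembles a radiosity-style setup: each surface patch stores an energy level plus precomputed transfer coefficients (form factors) to the patches it illuminates, and when a light or surface changes, only the *delta* in energy is pushed recursively through the links until it becomes negligible. All names and numbers here are made up for the example:

```python
# Toy incremental "light rig": patches with precomputed links,
# updated by propagating energy deltas instead of re-rendering.

EPSILON = 1e-4  # stop propagating once the remaining delta is negligible


class Patch:
    def __init__(self, name, reflectivity):
        self.name = name
        self.reflectivity = reflectivity  # fraction of received light bounced on
        self.energy = 0.0
        self.links = []  # (neighbor_patch, form_factor) pairs, built at load time


def link(a, b, form_factor):
    # Precomputed at map build / level load time, like the rig described above.
    a.links.append((b, form_factor))


def propagate(patch, delta):
    """Recursively push a change in energy through the rig."""
    patch.energy += delta
    bounce = delta * patch.reflectivity
    if abs(bounce) < EPSILON:
        return
    for neighbor, ff in patch.links:
        propagate(neighbor, bounce * ff)


# A tiny three-patch "scene", wired up once:
wall, floor, ceiling = Patch("wall", 0.5), Patch("floor", 0.5), Patch("ceiling", 0.5)
link(wall, floor, 0.25)
link(floor, ceiling, 0.25)

# A light source brightens the wall; only the affected patches update:
propagate(wall, 10.0)
print(wall.energy, floor.energy, ceiling.energy)  # → 10.0 1.25 0.15625
```

A dimming light would just propagate a negative delta through the same links. The obvious hard part, which this sketch dodges entirely, is that the links themselves go stale when geometry moves, and re-deriving visibility between patches is exactly the expensive part that ray casting solves.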
Does that sound like crazy talk, or are there approaches that try to apply a diff-based approach to lighting each frame, just under a different terminology?
Any reason why brute-forcing is obviously the only viable solution, or more efficient than it sounds?