
Lighting And Rendering - Brute Force Efficient!?

Started by April 01, 2014 03:33 AM
2 comments, last by Ashaman73 10 years, 10 months ago

Hi all,

I hope this belongs here. I suppose it is not exactly a graphics programming question because it is too theoretical in nature.

I watched Carmack's talk about "Principles of Lighting and Rendering" and two things seem a little weird to me.

1.) (not so important) ... he obviously does not see procedural content generation as an important part of the future. I am pretty sure it will be. Sure, we will throw gigabytes of data at the problem, but we will automate the content generation process and work with meta-objects. Artists will want to create thousands of objects with one click, pick the keepers, and drag them into the scene. The rulesets will get extremely sophisticated, and engines will be better artists, architects and engineers than humans can be. I think saying that PCG only has a future in niche markets is just wrong.

2.) ... it just seems weird to me that a pathtracing / brute-force approach can be the most efficient solution to a problem. There is probably a lot I don't know about; maybe something about voxel-related optimizations makes it more efficient somehow. But I always thought some kind of light/energy rig should make updates based on changes possible, so that the renderer does not have to start from scratch each frame.

What do you guys think? What do your guts and experiences tell you?

Is brute force pathtracing more doable than it sounds? Is there a kind of problem that a rig would be extremely bad at?

What I have in mind for the rig is this:

It is created initially, maybe stored in the map file or built initially when the level is loaded.

It would respond to geometry changes (deformation, introduction of new objects and light sources etc.)

On change it would be traversable: the rig would know which surfaces are affected when a surface receives/emits more (or less) light, and it would recursively update the energy levels, either at the vertex, polygon or even light-texture level.

The goal would be to get as close as possible to not having to cast any more rays after the first direct ray knows which object it is looking at. I guess determining the reflected objects could be optimized somehow with the help of the rig.
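For what it's worth, the rig described above closely resembles incremental (progressive) radiosity: surfaces become patches in a graph, links carry form factors, and only energy *deltas* are propagated when something changes. A minimal sketch, assuming a toy scene; all class and function names here are hypothetical, not from any real engine:

```python
# Hypothetical sketch of the proposed "light rig": a graph of surface
# patches where each patch knows which neighbours receive a share of
# its reflected energy. On a change, only the delta is pushed through
# the graph instead of re-tracing the whole scene from scratch.

class Patch:
    def __init__(self, name, emission=0.0, reflectance=0.5):
        self.name = name
        self.emission = emission        # light the patch emits itself
        self.reflectance = reflectance  # fraction of incoming light re-emitted
        self.incoming = 0.0             # accumulated incident energy
        self.links = []                 # (receiver, form_factor) pairs

    def radiosity(self):
        return self.emission + self.reflectance * self.incoming


def propagate(patch, delta, threshold=1e-4):
    """Recursively push a change in outgoing energy to linked patches."""
    if abs(delta) < threshold:          # stop once the change is negligible
        return
    for receiver, form_factor in patch.links:
        received = delta * form_factor
        receiver.incoming += received
        # the receiver re-emits part of what it just received
        propagate(receiver, received * receiver.reflectance, threshold)


# usage: a lamp and two walls
lamp = Patch("lamp", emission=10.0)
wall_a = Patch("wall_a", reflectance=0.5)
wall_b = Patch("wall_b", reflectance=0.5)
lamp.links = [(wall_a, 0.3), (wall_b, 0.2)]
wall_a.links = [(wall_b, 0.1)]

propagate(lamp, lamp.emission)  # initial distribution

lamp.emission += 2.0            # the lamp brightens later ...
propagate(lamp, 2.0)            # ... and only the delta is propagated
```

The catch, and the reason it is hard to get "close to zero rays", is that geometry changes invalidate the form factors themselves, not just the energy values, which forces visibility to be recomputed.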

Does that sound like crazy talk, or are there approaches that try to compute a diff of the lighting each frame, just under a different terminology?

Any reason why brute-forcing is obviously the only viable solution, or more efficient than it sounds?

Given enough eyeballs, all mysteries are shallow.

MeAndVR


he obviously does not see procedural content generation as an important part of the future.

I've invested a lot of time in procedural content generation and I have to admit that Carmack is most probably right. The problems arise as soon as you move away from abstract content (most rogue-like games have abstract content); in other words, procedural content in a designed, non-abstract world performs really poorly. There are some special cases where a certain degree of procedural content helps (terrain/foliage generation), but in general this approach is much like the old AI promise (we will have human-like AI in a few years/decades vs. reality: AI is really, really far away from being intelligent).
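To make the abstract-vs-designed distinction concrete: the cases where PCG shines today are ones like terrain, where simple layered-noise rules produce acceptable output without any hand-assigned meaning. A toy sketch, assuming one dimension for brevity; all function names are illustrative:

```python
# Toy illustration of where PCG works well: a heightmap from layered
# value noise. The rules are simple and the output needs no hand-tuned
# meaning -- unlike a designed level, where each room has to make
# gameplay sense. Purely a sketch, not production terrain code.
import math
import random

def value_noise(x, seed=0):
    """Smoothly interpolated pseudo-random values along one axis."""
    def lattice(i):
        # deterministic per-lattice-point value derived from the seed
        return random.Random(i * 1000003 + seed).random()
    i, frac = int(math.floor(x)), x - math.floor(x)
    t = frac * frac * (3 - 2 * frac)   # smoothstep interpolation
    return lattice(i) * (1 - t) + lattice(i + 1) * t

def terrain_height(x):
    # sum a few octaves: large rolling hills plus finer detail
    return sum(value_noise(x * 2**o, seed=o) / 2**o for o in range(4))

heights = [terrain_height(x / 8) for x in range(32)]
```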


... it just seems weird to me that using a pathtracing / brute force approach can be the most efficient solution to a problem.

When Carmack talks about brute-force approaches, he often refers to the utilization of modern GPUs. You can compare it to a racing car: if you took the shortest route through a city or the much longer route on a highway around the city, which would be the faster route?

Well, GPUs are really fast, but they need a clear, simple, and long lane to really shine. If you try to 'optimize' all the time on the GPU, utilizing the CPU etc., it is most likely that you build in too many corners, so that the GPU can't rush through the work. GPUs are really good at brute-force solutions, though Carmack will optimize the brute-force solution too.
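The structural reason path tracing fits that "long lane" is that every pixel is a pure, independent Monte Carlo estimate with no shared mutable state, which is exactly what maps onto thousands of GPU threads. A minimal sketch of that workload shape (the toy brightness function and all names are made up; a real tracer would trace rays where `shade` integrates):

```python
# Minimal sketch of why brute-force sampling suits GPUs: each pixel is
# computed by a pure function with no dependency on any other pixel,
# so the whole image is one wide, uniform, branch-light workload.
import random

WIDTH, HEIGHT, SAMPLES = 4, 4, 64

def shade(x, y, rng):
    """Monte Carlo estimate for one pixel: average many random samples.
    A real path tracer would trace rays here; we just integrate a toy
    brightness function to show the structure of the work."""
    total = 0.0
    for _ in range(SAMPLES):
        # jitter the sample position within the pixel
        u = (x + rng.random()) / WIDTH
        v = (y + rng.random()) / HEIGHT
        total += u * v
    return total / SAMPLES

# every pixel is independent -> trivially parallel across GPU threads
rng = random.Random(42)
image = [[shade(x, y, rng) for x in range(WIDTH)] for y in range(HEIGHT)]
```

A rig-style incremental update, by contrast, is a graph traversal with data-dependent branching and scattered memory access, which is much closer to the "shortcut through the city" in the analogy above.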


As I see it, depending on artists to design worlds is not sustainable.

I also do not see any limits that are inherent to PCG.

What is there that a sophisticated procedural content generator cannot do?

I guess if it is hard to streamline the update operations of a rig approach, that would explain it ... kind of.

It might be significantly harder than normal shader operations. Still ... casting that many rays and checking whether they intersect with thousands of polygons ... I don't think streamlining that would beat an update approach, if updating lighting information is feasible at all.
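One caveat on the "rays times thousands of polygons" estimate: tracers don't test every polygon per ray; acceleration structures prune most of the work, and even a single bounding-box test per object skips whole groups of triangles. A minimal ray/AABB slab test, purely as an illustration (function name is mine):

```python
# Sketch of the cheapest pruning step a tracer uses: test a ray
# against an axis-aligned bounding box before touching any of the
# triangles inside it. Standard "slab" method, illustrative only.

def ray_hits_aabb(origin, direction, box_min, box_max):
    """Return True if a ray intersects an axis-aligned bounding box."""
    t_near, t_far = float("-inf"), float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-12:              # ray parallel to this slab pair
            if o < lo or o > hi:
                return False
            continue
        t0, t1 = (lo - o) / d, (hi - o) / d
        if t0 > t1:
            t0, t1 = t1, t0
        t_near, t_far = max(t_near, t0), min(t_far, t1)
        if t_near > t_far or t_far < 0:
            return False                # slabs don't overlap -> miss
    return True

# a ray along +x hits a unit box ahead of it, misses one off to the side
assert ray_hits_aabb((0, 0, 0), (1, 0, 0), (2, -1, -1), (3, 1, 1))
assert not ray_hits_aabb((0, 0, 0), (1, 0, 0), (2, 2, 2), (3, 3, 3))
```

With a BVH built from such boxes, per-ray cost grows roughly logarithmically in scene size rather than linearly, which narrows the gap between "brute force" and an incremental update more than the naive count suggests.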

Given enough eyeballs, all mysteries are shallow.

MeAndVR


I also do not see any limits that are inherent to PCG.

You will see it once you try to use it.


What is there that a sophisticated procedural content generator can not do?

In theory you might be right; in practice we still need some proof of it. I'm sure there are still people out there searching for an algorithm that solves an NP-hard problem in polynomial time ;)

Eventually PCG might be a useful tool for the designer, but the designer is still needed for non-abstract, non-trivial, quality work.

This topic is closed to new replies.
