
Quake3 ambient lighting

Started by afraidofdark
7 comments, last by Styves 6 years, 11 months ago

I have just noticed that, in Quake 3 and Half-Life, dynamic models are affected by the lightmap. For example, in dark areas the gun the player holds appears darker. How did they achieve this effect? I could use image-based lighting techniques (like placing an environment probe and using it for reflections and ambient lighting), but that tech wasn't used in games back then, so there must be a simpler method.

Here is a link that shows how modern engines do it: Indirect Lighting Cache. It would be nice if you know of a paper that explains this technique. Can I apply this to Quake 3's lightmap generator and BSP format?

www.lostchamber.com

Quake 3 used a light grid: a 3D grid of precomputed lighting that it creates at map compile time. It then finds the closest N lights for each dynamic model it is rendering and uses simple vertex lighting from that set of lights for the model. This is just for dynamic models, though; the scene itself is all precomputed lightmaps (though they are toggled/adjusted to achieve more dynamic effects, like a rocket's light trail). The Quake 3 code is readily available online; you can look it over yourself to get an idea of how they did it.
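As a rough sketch of that vertex-lighting step (illustrative C, not Quake 3's actual code; the types and names here are made up):

typedef struct { float r, g, b; } color_t;
typedef struct { float dir[3]; color_t color; } light_t;  /* dir: toward the light, normalized */

/* Accumulate an ambient term plus simple N-dot-L contributions
   from the small set of lights chosen for this model. */
void LightModelVertex(const float normal[3],
                      const light_t *lights, int numLights,
                      color_t ambient, color_t *out)
{
    *out = ambient;
    for (int i = 0; i < numLights; ++i) {
        float ndotl = normal[0]*lights[i].dir[0]
                    + normal[1]*lights[i].dir[1]
                    + normal[2]*lights[i].dir[2];
        if (ndotl <= 0.0f) continue;   /* skip back-facing lights */
        out->r += lights[i].color.r * ndotl;
        out->g += lights[i].color.g * ndotl;
        out->b += lights[i].color.b * ndotl;
    }
}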

Here's a good description of the light volume part of the level format: http://www.mralligator.com/q3/#Lightvols

Here's an interesting article talking about actually trying to use the light volumes for rendering: http://www.sealeftstudios.com/blog/blog20160617.php

I also think the bots/opponents are shadowed in the darker areas of the map, although it's harder to tell because they move around very quickly, and the dynamic lights light them up when they fire their guns at you.

Lots of games from that era would trace a single ray downwards to find the floor under the player's feet, then find the lightmap-UV coordinates at that point, then sample the lightmap at those coordinates and use it as a constant ambient value for the player.
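A minimal sketch of that trick, assuming two hypothetical helpers (TraceRayDown and SampleLightmap stand in for whatever your engine provides; they aren't real API calls):

typedef struct { float r, g, b; } color_t;

/* Hypothetical engine helpers. */
int     TraceRayDown(const float origin[3], float outHit[3], float outUV[2]);
color_t SampleLightmap(const float uv[2]);

/* Cast a ray straight down from the player, look up the lightmap UVs at
   the hit point, and use that texel as a constant ambient for the model. */
color_t AmbientFromFloor(const float playerPos[3])
{
    float   hitPoint[3];
    float   lightmapUV[2];
    color_t ambient = { 0.1f, 0.1f, 0.1f };   /* fallback if nothing is below */

    if (TraceRayDown(playerPos, hitPoint, lightmapUV))
        ambient = SampleLightmap(lightmapUV);

    return ambient;
}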

On 15.07.2017 at 4:39 PM, Hodgman said:

Lots of games from that era would trace a single ray downwards to find the floor under the player's feet, then find the lightmap-UV coordinates at that point, then sample the lightmap at those coordinates and use it as a constant ambient value for the player.

Increasing the sample count and applying distance-based attenuation may produce a good result. I'll definitely give this a shot.

One of my questions remains unanswered: how do modern engines do this? Is there a paper, tutorial, or book chapter that explains UE4's light cache or Unity's light probes? Before implementing anything, I like to learn about it.

www.lostchamber.com
10 hours ago, afraidofdark said:

One of my questions remains unanswered: how do modern engines do this? Is there a paper, tutorial, or book chapter that explains UE4's light cache or Unity's light probes? Before implementing anything, I like to learn about it.

Here are some options for storing directional ambient light; any of them can be called a 'probe':


One color for each side of a cube, so 6 directions; you blend the 3 face colors facing a given normal. (Known as Valve's ambient cube, used in HL2. A minimal evaluation sketch follows this list.)

Spherical harmonics. The number of bands you use defines the detail; 2 or 3 bands (12 / 27 floats for RGB) is enough for ambient diffuse.

Cube maps. Enough detail for reflections (used a lot for IBL / PBR today). The lowest mip LOD is roughly equivalent to an ambient cube.

Dual paraboloid maps (or two sphere maps). Same as cube maps, but needs only 2 textures instead of 6.
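Here is the promised evaluation sketch for the ambient cube case, assuming the six colors are ordered +X, -X, +Y, -Y, +Z, -Z (a common convention, not a fixed spec):

typedef struct { float r, g, b; } color_t;

/* Blend the three cube faces the normal points toward, weighted by the
   squared normal components (the weights sum to 1 for a unit normal). */
color_t EvalAmbientCube(const color_t cube[6], const float n[3])
{
    float n2[3] = { n[0]*n[0], n[1]*n[1], n[2]*n[2] };
    int   ix = n[0] < 0.0f;   /* pick the -X face when the component is negative */
    int   iy = n[1] < 0.0f;
    int   iz = n[2] < 0.0f;
    color_t c;
    c.r = n2[0]*cube[ix].r + n2[1]*cube[2+iy].r + n2[2]*cube[4+iz].r;
    c.g = n2[0]*cube[ix].g + n2[1]*cube[2+iy].g + n2[2]*cube[4+iz].g;
    c.b = n2[0]*cube[ix].b + n2[1]*cube[2+iy].b + n2[2]*cube[4+iz].b;
    return c;
}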


Independent of the data format you choose, there remains the question of how to apply it at a given sample position. Some options:


Bounding volumes set manually by an artist, e.g. a sphere or box with a soft border: you find all affecting volumes per sample and accumulate their contributions.

Uniform grid / multiresolution grids: you interpolate the 8 closest probes. E.g. UE4's light cache.

Delaunay tetrahedralization (the dual of a Voronoi diagram): you interpolate the closest 4 probes, similar to interpolating 3 triangle vertices by barycentric coordinates in the 2D case. AFAIK Unity uses this. (A barycentric sketch follows this list.)
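For the tetrahedral case, a sketch of the barycentric weights via Cramer's rule (no handling of degenerate tetrahedra; v holds the 4 probe positions, and the resulting weights blend the 4 probe values):

static float Det3(const float a[3], const float b[3], const float c[3])
{
    return a[0]*(b[1]*c[2] - b[2]*c[1])
         - a[1]*(b[0]*c[2] - b[2]*c[0])
         + a[2]*(b[0]*c[1] - b[1]*c[0]);
}

/* Solve e1*w1 + e2*w2 + e3*w3 = p - v0 for the barycentric weights. */
void TetraWeights(const float v[4][3], const float p[3], float w[4])
{
    float e1[3], e2[3], e3[3], d[3];
    for (int i = 0; i < 3; ++i) {
        e1[i] = v[1][i] - v[0][i];
        e2[i] = v[2][i] - v[0][i];
        e3[i] = v[3][i] - v[0][i];
        d[i]  = p[i]    - v[0][i];
    }
    float det = Det3(e1, e2, e3);
    w[1] = Det3(d,  e2, e3) / det;
    w[2] = Det3(e1, d,  e3) / det;
    w[3] = Det3(e1, e2, d ) / det;
    w[0] = 1.0f - w[1] - w[2] - w[3];   /* all four weights sum to 1 */
}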


The automatic methods also often require manual tuning, e.g. a cut-off plane to prevent light leaking through a wall into a neighbouring room.

Note that a directionless ambient light like the one used in Quake does not support bump mapping. The 'get ambient from the floor' trick works well only for objects near the ground.

There are probably hundreds of papers covering the details.

On 7/16/2017 at 8:00 PM, afraidofdark said:

Increasing the sample count and applying distance-based attenuation may produce a good result. I'll definitely give this a shot.

One of my questions remains unanswered: how do modern engines do this? Is there a paper, tutorial, or book chapter that explains UE4's light cache or Unity's light probes? Before implementing anything, I like to learn about it.

The "get ambient from floor" only applies to Quake 2 (and possibly Quake 1). Quake 3 uses a 3D grid, each cell contained 3 values: Direction, Ambient light color, and Direct light color. This data is filled during lightmap baking.

The grid is bound to the level's bounding box and divided by a configurable cell size (the default is 64, 64, 128 in XYZ). That is, each cell spans 64 units in X and Y and 128 units in Z (which is up in Quake).
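As a simplified sketch of the lookup (nearest cell only; the engine's R_SetupEntityLightingGrid actually does trilinear filtering over the 8 surrounding cells, and the globals below are hypothetical stand-ins for data loaded from the BSP):

/* One entry per grid cell, following the lightvols layout in the BSP
   spec linked earlier: ambient RGB, directed RGB, encoded direction. */
typedef struct {
    unsigned char ambient[3];
    unsigned char directed[3];
    unsigned char latLong[2];   /* light direction, latitude/longitude encoded */
} gridPoint_t;

/* Hypothetical globals filled in at level load time. */
extern gridPoint_t *lightGrid;
extern float        gridOrigin[3];   /* mins of the level's bounding box */
extern int          gridBounds[3];   /* cell counts per axis */
static const float  gridSize[3] = { 64.0f, 64.0f, 128.0f };

const gridPoint_t *SampleLightGrid(const float pos[3])
{
    int cell[3];
    for (int i = 0; i < 3; ++i) {
        int c = (int)((pos[i] - gridOrigin[i]) / gridSize[i]);
        if (c < 0) c = 0;
        if (c >= gridBounds[i]) c = gridBounds[i] - 1;   /* clamp to the grid */
        cell[i] = c;
    }
    return &lightGrid[cell[0]
                    + cell[1] * gridBounds[0]
                    + cell[2] * gridBounds[0] * gridBounds[1]];
}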


Modern games do largely the same thing; however, they store the results in different forms (spherical harmonics, among others). JoeJ covers this in the post above.

For ambient diffuse, variations of the approaches JoeJ describes can be seen in modern engines, each with its own tradeoffs. For ambient reflections, cube maps are used, oftentimes hand-placed.

This topic is closed to new replies.
