When I was younger, I looked at some of the big titles like Quake and Unreal and assumed they represented how all video games worked. In recent years I've come to understand that those assumptions were wrong.
One thing in particular I was thinking about today is lighting. In Quake and Unreal, a developer would place lights in the scene, compute the lighting offline, and then bake that data into the level as lightmaps. I assumed that was how all 3D engines worked, just with calculations that didn't look as good. But now I'm looking at several old games and noticing... I don't think that's how they work.
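To make sure I'm describing that correctly, here's roughly how I picture the lightmap approach. This is just a sketch with made-up names, sizes, and placeholder data, not actual Quake or Unreal code: the light values are computed ahead of time into a coarse second texture, and at draw time each pixel's base texel is simply multiplied by the baked light value.

```c
/* Minimal sketch of the lightmap idea as I understand it.
 * All names and resolutions here are assumptions for illustration. */
#include <stdint.h>
#include <stdio.h>

#define TEX_W 64   /* base texture resolution (assumed)            */
#define LM_W  16   /* lightmap is much coarser (assumed 4:1 ratio) */

static uint8_t base_tex[TEX_W][TEX_W]; /* artist-authored surface texture      */
static uint8_t lightmap[LM_W][LM_W];   /* brightness baked by an offline pass  */

/* Runtime shading: modulate the texture by the precomputed light value. */
static uint8_t shade(int u, int v)
{
    uint8_t albedo = base_tex[v][u];
    uint8_t light  = lightmap[v * LM_W / TEX_W][u * LM_W / TEX_W];
    return (uint8_t)((albedo * light) / 255);
}

int main(void)
{
    /* Fill with placeholder data so the sketch runs standalone. */
    for (int v = 0; v < TEX_W; v++)
        for (int u = 0; u < TEX_W; u++)
            base_tex[v][u] = 200;
    for (int v = 0; v < LM_W; v++)
        for (int u = 0; u < LM_W; u++)
            lightmap[v][u] = (uint8_t)(v * 255 / (LM_W - 1)); /* fake falloff */

    printf("lit texel at (0,0): %u\n", (unsigned)shade(0, 0));
    printf("lit texel at (0,%d): %u\n", TEX_W - 1, (unsigned)shade(0, TEX_W - 1));
    return 0;
}
```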
A few particular titles I'm looking at are on the Nintendo 64: GoldenEye, Turok 2, Perfect Dark. But when I look at the lighting in these games, it doesn't tend to have the same fading and fall-off I see in Unreal and Quake 2. In fact, in GoldenEye the lighting is usually at a constant level, except for dark spots that might as well have been hand-painted. And it occurs to me that if that's the lighting in your game, then computing lighting and saving the result as a lightmap is an abhorrent waste of space. But if it wasn't lightmap data, then how was the lighting handled?
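To put some rough, completely made-up numbers on why that feels wasteful: even a coarse lightmap per surface adds up fast compared to storing just one brightness value per surface.

```c
/* Back-of-envelope comparison with entirely assumed numbers:
 * a level with 5,000 surfaces, each with a 16x16 luxel lightmap
 * at one byte per luxel, versus a single brightness byte per surface. */
#include <stdio.h>

int main(void)
{
    long surfaces       = 5000;                    /* assumed level size */
    long lightmap_bytes = surfaces * 16 * 16 * 1;
    long constant_bytes = surfaces * 1;

    printf("lightmaps:       %ld KB\n", lightmap_bytes / 1024); /* ~1250 KB */
    printf("constant values: %ld KB\n", constant_bytes / 1024); /* ~4 KB    */
    return 0;
}
```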
How was lighting handled in these early 3D games? What was used to determine where there were shadows and where there weren't?