Help with ppl and lightmaps
Half-Life 2 uses a method that combines lightmaps and standard per-pixel lighting. There's a paper on the NVIDIA developer site; perhaps you should take a look.
If at first you don't succeed, redefine success.
Quote:
Original post by python_regious
Half-Life 2 uses a method that combines lightmaps and standard per-pixel lighting. There's a paper on the NVIDIA developer site; perhaps you should take a look.
Do you have a link to that? I can't find it.
Bugger, I can't find it either. It was certainly there...
If at first you don't succeed, redefine success.
It's at ATI's site:
http://www2.ati.com/developer/gdc/D3DTutorial10_Half-Life2_Shading.pdf
Odin
------------------------------
BOOMZAP
Try our latest game, Jewels of Cleopatra
Ah, well. Wasn't too far off. [lol]
If at first you don't succeed, redefine success.
It seems interesting. They use a technique called "Radiosity Normal Mapping", but I can't figure it out. What are they storing in the lightmap?
Quote:
Computing Light Map Values:
• Traditionally, when computing light map values using a radiosity preprocessor, a single color value is calculated
• In Radiosity Normal Mapping, we transform our basis into tangent space and compute light values for each vector
At the pixel level...
• Transform the normal from a normal map into our basis
• Sample three light map colors, and blend between them based on the transformed vector:
lightmapColor[0] * dot( bumpBasis[0], normal ) + lightmapColor[1] * dot( bumpBasis[1], normal ) + lightmapColor[2] * dot( bumpBasis[2], normal )
When they say "compute light values for each vector", do they mean that, when building the radiosity map, they use the T, B, N vectors as normals and calculate 3 different light values? And then, at the pixel level, perform some kind of weighting between these 3 values based on the direction of the normal obtained from the normal map?
[Edited by - mikeman on September 5, 2004 10:24:24 AM]
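In case it helps, here is a minimal CPU-side sketch of just the pixel-level blend from the quote above. It assumes the commonly cited tangent-space basis vectors from the presentation; the lightmap colors and the tangent-space normal below are made-up sample values purely for illustration, not anything from the paper.

#include <stdio.h>
#include <math.h>

/* Sketch of the Radiosity Normal Mapping pixel blend (illustrative only). */

typedef struct { float x, y, z; } vec3;

static float dot3(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

int main(void)
{
    /* Tangent-space basis (z = surface normal), as given in the HL2 slides. */
    const vec3 bumpBasis[3] = {
        {  sqrtf(2.0f/3.0f),  0.0f,              1.0f/sqrtf(3.0f) },
        { -1.0f/sqrtf(6.0f),  1.0f/sqrtf(2.0f),  1.0f/sqrtf(3.0f) },
        { -1.0f/sqrtf(6.0f), -1.0f/sqrtf(2.0f),  1.0f/sqrtf(3.0f) }
    };

    /* One radiosity-lit color per basis vector (hypothetical lightmap texels). */
    const vec3 lightmapColor[3] = {
        { 0.8f, 0.6f, 0.4f },
        { 0.3f, 0.3f, 0.5f },
        { 0.1f, 0.2f, 0.2f }
    };

    /* Per-pixel normal from the normal map, already in tangent space. */
    vec3 normal = { 0.2f, 0.1f, 0.9f };
    float len = sqrtf(dot3(normal, normal));
    normal.x /= len; normal.y /= len; normal.z /= len;

    /* Weight each lightmap sample by how much the normal faces that basis vector. */
    vec3 lit = { 0.0f, 0.0f, 0.0f };
    for (int i = 0; i < 3; ++i) {
        float w = dot3(bumpBasis[i], normal);
        lit.x += lightmapColor[i].x * w;
        lit.y += lightmapColor[i].y * w;
        lit.z += lightmapColor[i].z * w;
    }

    printf("lit color = (%.3f, %.3f, %.3f)\n", lit.x, lit.y, lit.z);
    return 0;
}

So the lightmap stores three colors per lumel (one per basis direction), and the per-pixel normal just decides how to mix them.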
This topic is closed to new replies.