Dynamic Lightmaps
Hi,
I'd like to implement per-pixel lighting in my project. I'd prefer some dynamic lightmapping method that works by projecting one or more lightmaps (for omni-directional lights, hemispheric lighting, and spotlight effects).
I haven't found anything useful so far — just some ideas, nothing concrete. If you could point me to a paper or website, that'd be great.
thx,
Flo
PS: There's this tutorial on Ron Frazier's homepage (http://www.ronfrazier.net/apparition/research/per_pixel_lighting.html) which is quite good, but it's also pretty much out of date. I think there are better and faster ways using shader programs that don't rely on so many texture stages...
What you want to look up is projected textures.
Also shadow maps (which are a special case of projected textures).
http://www.humus.ca/
This is a good site for some of the stuff you're looking for.
Another place to look is NVIDIA's developer site; there's some interesting info there.
After having played a bit with this stuff, I can share my experience.
People think of a technique called per-pixel lighting.
Since we are implementors, we should call it by its right name, which is per-fragment lighting, but considering the trend, I'll go with the former.
The demos from Ron Frazier (or from NVIDIA, for that matter) use the fixed-function pipe, but the ideas behind them can be easily ported to the programmable pipe.
A thing that's usually unrecognized by The Conventional Wisdom is that per-pixel lighting is not a single thing. Rather, it should be divided into two methods.
Per-fragment attenuation: this is possibly the easier one. The only problem is that you need to find a way to put the lights in object space, or the vertices in the same space as the lights, which usually means world coordinates. Everything else is very straightforward. The vertex program just needs to pass some parameters to the fragment pipe, which can look up a texture based on distance from the light. Since most of those values are uniforms plus a bunch of varyings, it's usually possible to compute a very high number of lights in a single pass. Most of the time, you'll hit the hardware's maximum texture lookups well before you reach that limit.
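As a rough sketch of that idea in GLSL (the names lightVec, attenuationTex and lightRadius are assumptions for illustration, not from any of the demos mentioned):

// Fragment shader: distance attenuation via a 1D texture lookup.
// Assumes lightVec was computed per-vertex (light and vertex in the
// same space) and interpolated; lightRadius is a uniform.
uniform sampler1D attenuationTex; // precomputed falloff curve
uniform sampler2D baseTex;
uniform vec3 lightColor;
uniform float lightRadius;

varying vec3 lightVec;   // surface-to-light vector
varying vec2 texCoord;

void main()
{
    // Normalized distance in [0,1]; the 1D texture encodes the falloff.
    float d = length(lightVec) / lightRadius;
    float atten = texture1D(attenuationTex, clamp(d, 0.0, 1.0)).r;

    vec4 base = texture2D(baseTex, texCoord);
    gl_FragColor = vec4(base.rgb * lightColor * atten, base.a);
}

Since the per-light data is a handful of uniforms and varyings, several such lights can be summed in one pass before sampler limits bite, as noted above.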
Hint: don't even think about making this flexible by large margins. That could easily take many months in its purest, programmable form. I speak from experience here.
Hint: unless you're willing to do some cheap hacks (which could work flawlessly, mind you), interconnecting all the components could take huge development time.
Per-fragment lighting (bump mapping): that's what we all know. I don't think I really need to say anything about this. To be honest, I still think most people will have more to say than me.
Hint: don't compute the binormal and tangent on the fly. That data should really be computed when the artist makes the models. There's a tradeoff here, but considering the cost per MB today, I would simply put everything on disk instead of computing it every time a model is loaded (see the sketch after these hints).
Hint: don't abuse it. While its purest form is pretty fast on today's hardware, the vertex pipe is still stronger than the fragment pipe. If you can use real vertices, you really should. I recently saw a game in which bump mapping is abused: near a door I found some sort of broken trigger that was just a texture. This is cheap. Considering it looked at least 20x10x5 cm, it should have been real geometry.
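For what it's worth, here's a minimal sketch of the vertex-side tangent-space setup those hints refer to, assuming the tangent and binormal arrive as precomputed per-vertex attributes (attribute and uniform names are mine, not from any of the demos):

// Vertex shader: transform the light direction into tangent space.
// tangent and binormal are per-vertex attributes precomputed offline,
// as suggested above.
attribute vec3 tangent;
attribute vec3 binormal;

uniform vec3 lightPosObj;  // light position in object space

varying vec3 lightVecTS;   // tangent-space light vector for the FP

void main()
{
    vec3 toLight = lightPosObj - gl_Vertex.xyz;
    // Rows of the TBN matrix: project onto each basis vector.
    lightVecTS = vec3(dot(toLight, tangent),
                      dot(toLight, binormal),
                      dot(toLight, gl_Normal));
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}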
Thanks for the great link and the ideas.
So how would you implement per-fragment point lights? Would you do it using projected textures, or lighting calculations in a fragment program? I like the idea of using projected textures as spotlights, because of the effects you can get by simply using different "spot textures". How would you handle point lights, and which solution is faster?
How about both.
Projected textures alone can't really do good lighting (no diffuse, no specular, no attenuation).
So a fragment program is probably the best thing for per-fragment lighting.
However, you can add projective textures to that fragment program to do some really cool stuff (a bare projective lookup is sketched below).
Ever seen the cool lighting effects in Doom 3? Those are done in a per-pixel-lighting fragment program with some projected textures in it.
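To see why projection alone isn't enough: a bare projective lookup only tints the surface and carries no diffuse, specular, or falloff terms (a minimal sketch with assumed sampler names):

// Fragment shader: bare projected texture, no lighting terms.
uniform sampler2D spotTex;
uniform sampler2D baseTex;

varying vec4 projCoord;  // generated in the vertex program
varying vec2 texCoord;

void main()
{
    // texture2DProj divides by projCoord.w for the perspective lookup.
    vec3 spot = texture2DProj(spotTex, projCoord).rgb;
    vec3 base = texture2D(baseTex, texCoord).rgb;
    // The projected texture acts only as a color mask here.
    gl_FragColor = vec4(base * spot, 1.0);
}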
Yeah, I'm impressed by what they're doing with those lightmaps in Doom 3 too. That's why I want to implement it: it gives very good-looking results.
Implementing per-pixel lighting in fragment programs is no problem for me; I've read several tutorials on the topic and played around with it. But I need some ideas about how to mix it with projected textures. I'll go through some of the examples on Humus's page and try to figure out how to implement it...
Quote: Original post by ZMaster
Yeah, I'm impressed by what they're doing with those lightmaps in Doom 3 too. That's why I want to implement it: it gives very good-looking results.
Not that I want to cut the hype right there, but many people (like I did in the past) think that a good pixel shader is enough to make a nice-looking scene.
In fact, a game like Doom 3 has much more than that. There are additional effects like stencil shadow volumes and render-to-texture tricks. The modeling is also a critical issue (I'm pretty sure everyone here who has implemented bump mapping can confirm that). And in the end, the level designers are awesome: they know where to put lights and objects in order to fully benefit from the lighting effects, especially specular lighting.
With that said, the task is not impossible. If you have a very good team of modelers and level designers (and programmers!) you can handle it too :)
Leaving aside the fact that we are not talking about light maps here (I'm always delighted by how people say one thing while meaning just the opposite), there's another thing I don't get from this discussion.
It looks like very few people understand what projected textures are for.
Projected textures are **not** meant to compute distance attenuation or angle attenuation.
A good lighting equation needs to evaluate projected textures, distance attenuation, and angle attenuation.
The projected texture really just provides a color for the light.
That result is attenuated by a distance factor. This distance factor is not always easy to compute.
The result is then attenuated by the famous "n dot l" (diffuse lighting) equation.
And then you may want to add "n dot h" for speculars.
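Putting those pieces together in one fragment program might look like the following sketch (the sampler and uniform names, like spotTex and attenuationTex, are assumptions for illustration, not anyone's actual code):

// Fragment shader: projected color * distance attenuation * n.l,
// plus an n.h specular term, as described above.
uniform sampler2D spotTex;        // projected light texture
uniform sampler1D attenuationTex; // distance falloff
uniform sampler2D baseTex;
uniform float lightRadius;
uniform float shininess;

varying vec4 projCoord;
varying vec3 lightVec;   // surface-to-light, pre-normalization
varying vec3 halfVec;    // half-angle vector
varying vec3 normal;
varying vec2 texCoord;

void main()
{
    vec3 n = normalize(normal);
    vec3 l = normalize(lightVec);

    // 1. Projected texture: just the light's color.
    vec3 lightColor = texture2DProj(spotTex, projCoord).rgb;
    // 2. Distance attenuation via a lookup.
    float d = length(lightVec) / lightRadius;
    float atten = texture1D(attenuationTex, clamp(d, 0.0, 1.0)).r;
    // 3. Angle attenuation: the famous n dot l.
    float ndotl = max(dot(n, l), 0.0);
    // 4. Optional specular: n dot h raised to a shininess power.
    float spec = pow(max(dot(n, normalize(halfVec)), 0.0), shininess);

    vec3 base = texture2D(baseTex, texCoord).rgb;
    gl_FragColor = vec4(lightColor * atten * (base * ndotl + spec), 1.0);
}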
Quote:
So how would you implement per-fragment point lights? Would you do it using projected textures, or lighting calculations in a fragment program? I like the idea of using projected textures as spotlights, because of the effects you can get by simply using different "spot textures". How would you handle point lights, and which solution is faster?
I implemented my experiment using a VP/FP pair. For spotlights I just used the well-known method (transform, set up a tcGen, and so on), but without using the fixed pipe: I put everything in the VP (sketched below). The FP just looked up the base texture and multiplied it by the projected texture lookup. This was then multiplied by a "forward" factor to avoid twin-lighting, as you probably know. That was everything in the FP, and it needed one uniform per light.
I didn't try point lights. I fear I could have some problems using this approach... I'll have to see.
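A sketch of the VP side just described, doing the texture-matrix transform in the program instead of via fixed-function tcGen (lightTexMatrix and the other uniforms are assumed names):

// Vertex shader: generate projective spot coordinates in the VP.
uniform mat4 lightTexMatrix;  // bias * light projection * light view
uniform vec3 lightDir;        // spot axis in object space
uniform vec3 lightPosObj;

varying vec4 projCoord;
varying float forwardFactor; // kills the reverse ("twin") projection

void main()
{
    projCoord = lightTexMatrix * gl_Vertex;

    // Positive only in front of the light; the FP clamps this to [0,1]
    // and multiplies it in, avoiding the twin light behind the source.
    forwardFactor = dot(normalize(gl_Vertex.xyz - lightPosObj), lightDir);

    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}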
I also want to bump on that.
Quote:
Not that I want to cut the hype right there, but many people (like I did in the past) think that...
*BONK*
Putting special effects in small standalone programs is a totally wasteful use of time. Integrating all the effects into a coherent, optimized, possibly programmable technology (the true definition of "engine") takes much more than just developing the sum of the technologies used.
Take alpha test, for example. How long does it take to implement alpha test? You may note it takes a single line of code.
Now consider what happens if your engine uses shadow volumes and a heavily alpha-tested texture shows up. We all know how bad wires are for that method.
Suddenly, the whole experience could be ruined.
So don't assume that because you know how this and that work, you'll be able to build something that does both.
And that's not to mention the artist and copyright issues: I see some engines around which say "you can just use $this tool to build content", where $this is a tool used for another game, thus badly violating that game's license.
Really, that hasn't been a problem so far: companies have always been very tolerant of hobbyists. But don't think this is a good thing. It's a weak spot.
For a per-fragment (per-pixel) lighting model, have a look at Phong illumination. I implemented a GLSL program in RenderMonkey based on this article.
Though that article doesn't specify any calculations for distance attenuation, this is how it's done for lights in OpenGL:
The contribution of light from a light source is multiplied by an attenuation factor. The attenuation factor is calculated like this:
atten = 1 / (kc + kl*d + kq*d^2)
where d is the distance between the light's position and the vertex, kc is OpenGL's GL_CONSTANT_ATTENUATION parameter, kl is GL_LINEAR_ATTENUATION, and kq is GL_QUADRATIC_ATTENUATION.
This is one way of doing distance attenuation, but you should check out the OpenGL programming guide, or the OpenGL specification for more information.
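As a sketch, here's the same formula evaluated analytically in a GLSL fragment shader, with uniforms mirroring the fixed-function parameters (the varying and sampler names are assumptions):

// Fragment shader: OpenGL-style distance attenuation, computed
// analytically instead of via a lookup texture.
uniform float kc;        // mirrors GL_CONSTANT_ATTENUATION
uniform float kl;        // mirrors GL_LINEAR_ATTENUATION
uniform float kq;        // mirrors GL_QUADRATIC_ATTENUATION
uniform vec3 lightPos;   // light position, same space as fragPos
uniform sampler2D baseTex;

varying vec3 fragPos;    // interpolated surface position
varying vec2 texCoord;

void main()
{
    float d = distance(lightPos, fragPos);
    // The fixed-function formula: 1 / (kc + kl*d + kq*d^2).
    float atten = 1.0 / (kc + kl * d + kq * d * d);
    vec3 base = texture2D(baseTex, texCoord).rgb;
    gl_FragColor = vec4(base * atten, 1.0);
}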