2 questions about lighting...
I have 2 small questions:
Q1: I'm trying to do PPL with specular lighting, and I'm stuck on the half-angle... I thought the half-angle was supposed to be computed between the fragment-to-light and fragment-to-camera vectors. When comparing with OpenGL lighting, it seems OpenGL uses the camera's forward vector instead of fragment-to-camera to compute the half-angle; the specular highlight moves when I rotate the camera. Isn't that weird? What is the *real* half-angle?
Q2: Is it better to pass both the tangent and the binormal to the vertex program, or to pass only the tangent and compute the binormal with a cross product in the VP?
Thanks a lot in advance!
SaM3d!, a cross-platform API for 3d based on SDL and OpenGL.The trouble is that things never get better, they just stay the same, only more so. -- (Terry Pratchett, Eric)
>> What is the *real* half-angle ?
The half-angle vector at a point is the sum of the point-to-light vector and the point-to-camera vector.
But for this to work you must use normalized vectors (point-to-light and point-to-camera).
It's also better to normalize the half-angle vector itself, though that's not required if you use lighting techniques based on lookup textures such as the NHHH map trick (but this requires hardware that can perform texture-dependent reads, e.g. GeForce3/4Ti or better, or Radeon 8500 and up).
>> Is it better to pass the tangent and the binormal to the vertex program or to pass only the tangent and compute the binormal using a cross-product in the VP ?
The most common way is to pre-compute the binormal: it saves vertex program instructions, so it's faster to execute and lets you write longer vertex programs if you're instruction-limited.
If you use skinning matrices at the vertex program level, you will probably prefer sending only T and N and computing B = N × T, because a cross product (between N and T) costs about as much as the few dot products needed to transform the binormal by the skinning matrix, and it saves some memory as well as the extra preprocessing step of pre-computing binormals.
Side note
There is one case, though, where the binormal and even the tangent should be recomputed in the vertex program: if you use non-uniform scaling in your modelview or skinning matrix (especially skinning).
(Sorry if it feels like I'm giving you a maths lesson — my intention is certainly not to humiliate you, but rather to give you the pieces you need to understand the problem.)
As you probably know, translations and rotations do not affect the orthogonality of a frame. That is, if you have a triplet (i, j, k) with i ⊥ j and i ⊥ k, and a transformation matrix M composed only of translations and rotations, then transforming i, j and k through M into i', j' and k' preserves orthogonality: you get i' ⊥ j' and i' ⊥ k'. This also holds if the transformation matrix contains a uniform scale factor (the scale factor in X equals the ones in Y and Z, e.g. glScale(2,2,2)). But when the scale is non-uniform (for example glScale(1,5,2)), orthogonality is no longer preserved.
(end of the maths lesson)
In this case the TBN triplet is no longer orthogonal and you have to rebuild it. To do so, send the N and T vectors to the vertex program, transform them with the skinning matrices etc., and compute something like:
N = normalize(N)
T = normalize(T - N * dot3(N, T))  // Gram-Schmidt: remove the component of T along N
B = cross(N, T)  // no need to normalize: N and T are already unit-length and orthogonal
Thanks a lot for your answers, Vincoof!
No problem with you teaching me mathematics :) — I just hadn't thought about non-uniform scaling...
I'm a bit bothered by the half-angle thing: does it mean there's a problem with OpenGL's lighting implementation (or rather some option I don't know about)?
Shouldn't the specular highlight be "aligned" with the center of the screen?
Sorry, silly me; I should have read the docs...
The answer is:
glLightModeli(GL_LIGHT_MODEL_LOCAL_VIEWER, GL_TRUE);
Does it mean you got it working now?
(Sorry for the late reply, but last night I couldn't access the gamedev servers.)
vincoof: Yes, everything is working now.
In fact, I had most things working from the beginning, except for this OpenGL lighting parameter I didn't know about.
As for when/how to compute the binormal, you've answered my question (I'm generating it in the VP right now, but changing that would be extremely easy).
I now have PPL with and without textures, plus bump mapping. The only things missing are PPL on (animated) models, light attenuation and some kind of gloss map.
Hmm, speaking of gloss maps: should it act on the shininess, the specular contribution, or something else? I tried the specular contribution and it doesn't look that bad...
(The GD servers are hard to reach for me as well... It seems their 5th-anniversary upgrade is not as easy as expected.)
The definition of a "gloss map" is not very clear, but it generally works as an attenuation of the specular contribution, so your approach is the correct one. Moreover, this is very easy to implement (just modulate the specular contribution by the gloss map), whereas changing the shininess is much more difficult: it either involves texture-dependent reads (as in the Rusty shader in the Radeon 8500 OpenGL shader demo) or requires exponent computations in pixel shaders (very slow, if supported at all). However, per-pixel shininess exponents look very good, and if you can handle them in your engine and artists can author models that use them correctly, you'll see the results are quite awesome. (That's part of Unreal Engine 3.0, by the way.)
Anyway, there are so many neat shader effects out there... you have to make choices and accept that you won't be able to use them all, be it for performance reasons (everyone's problem), programming difficulty (the programmer's issue), modeling complexity (the artists' motivation), hardware compatibility (the consumer's wallet) or driver stability (the hardware vendor's hassle).
Thanks for your answer, Vincoof!
Happily, the troubles you describe are those of a 3D graphics professional; mine are only those of a hobbyist ;*)
BTW, I've only done a quick test of the gloss map and haven't integrated it yet... I have 3 other questions:
- What is the problem with using it as a specular exponent map? Since I compute specular lighting in the fragment shader (LIT instruction), I could use the texture fragment instead of light[0].specular.
- I was planning to store the gloss map in the alpha channel of the normal map (and save a texture unit). Do you think that's a good/bad idea?
- I have a GeForce FX 5200 and get really bad FPS (~20 FPS @ 640x480 with nVidia's Linux driver). Should I panic, or is the weak GFX card responsible for that?
[Edited by - rodzilla on June 18, 2004 7:16:32 AM]
Quote: what is the problem with using it as a specular exponent map ? Since I compute specular lighting in the fragment shader (LIT instruction) I could use the texture fragment instead of light[0].specular
Yes you can, except that executing the LIT instruction per pixel is going to kill your fill rate.
But to be sure of that, you should look at the instruction cost tables.
Quote: I was planning to integrate the gloss map as the alpha map of the normal map (and save a texture unit). Do you think it's a good/bad idea ?
Not only is this a good idea, it's also what everyone does :)
Quote: I have a geForce FX 5200 and get really bad FPS (~20 FPS @ 640x480 with nVidia's Linux driver). Should I panic or is the bad GFX card responsible for that ?
Anything computed by a GeForce FX 5200 is very slow, especially pixel shading. Anyway, if you want your application to run smoothly on all hardware (including the GFFX 5200), you should try to improve performance instead of taking it for granted that "decent users will buy decent graphics cards".
There are many ways to improve performance. nVidia recently released a new programming guide for their hardware, available at developer.nvidia.com.
[Edited by - vincoof on June 18, 2004 9:06:58 AM]
Quote: Original post by vincoof
Quote: what is the problem with using it as a specular exponent map ? Since I compute specular lighting in the fragment shader (LIT instruction) I could use the texture fragment instead of light[0].specular
Yes you can, except that executing the LIT instruction per pixel is going to kill your fill rate.
But to be sure of that, you should look at the instruction cost tables.
Is there a faster solution than the LIT instruction to get full-featured lighting?
I'm pretty sure I can optimize my fragment shaders (using a normalization cube map, for example), so I think I'll work on that...