Alternate DOT3 calculation??
I just want to get an expert opinion to see if this is possible, as I'm a little shaky on the math.
I went through the emboss bump mapping tutorial and I understand it pretty well, and now I want to convert my emboss bump mapping code into DOT3 bump mapping.
I did the emboss bump mapping slightly differently: I used vertex arrays instead of going through point by point. To offset the texture, all I did was call glTranslatef() on the texture matrix, using the texture offset values I got from my calculations.
So I already have the texture offset values. Can I do the same thing with DOT3 bump mapping when applying the normalization cube map? Instead of going through each vertex and giving it the exact texture coordinates, can I glTranslatef() inside the texture matrix?
Legends Development Team
Even though DOT3 and emboss are both bump-mapping algorithms, they are really different in practice. The only common point is that both algorithms depend on the surface's 'unbumped' normal and the light position.
Emboss bump mapping consists of computing a texture offset, so that you can add/subtract a grayscale texture using that offset.
DOT3 bump mapping really does compute lighting equations using the DOT3 operation, because OpenGL diffuse and specular lighting are mostly based on dot products. DOT3 bump mapping cannot "glTranslatef() inside the texture matrix", because its computations are not based on texture offsets the way emboss is.
OK, that settled, I have the horrendous job of learning how to do DOT3 bump mapping.
Here's the situation:
I followed the tutorial recently posted on flipcode, but I can't get the normalization cube map to work in the game I'm working on (see this sig - not the first one). However, the lighting is unidirectional, and apparently there is no need for a normalization cube map at all.
However, every single place I look for an example, everyone uses a normalization cube map to DOT3 with the normal texture. So, if I don't DOT3 with the cube map, what DO I DOT3 with?
It would also REALLY help to know a few other things:
1. Are s-tangents and t-tangents (binormals) just the s and t texture coordinates when defined in tangent space?
2. If there are no transformations to the modelview matrix, is the inverse modelview matrix the same as the regular modelview matrix?
3. If it is, do I still have to multiply by the light vector to get the coordinates in object space?
4. What calls to glTexEnvi() do I need to accomplish DOT3 bump mapping without a normalization cube map?
Legends Development Team
Off topic, but first of all, let me congratulate you for covering the topic so well. You really seem to have good enough knowledge to implement bump mapping. I'm not being sarcastic - so many times n00bs say "hey man bump-mappin' looks great I want to do the same" without even knowing the difference between object space and eye space. Not that they're stupid, they just don't know what awaits them. I'm glad that you already read some articles discussing the topic! It'll help a lot.
Oh well, I'm *really* off topic now...
The "normalization cubemap" maps a 3D texture coordinate to an RGB triplet that encodes a normalized vector, compressed into the range [0,1]. Then you can use this texture output to DOT3 with some other vector (also compressed into the range [0,1]). Without a normalization cubemap, you have to replace this texture lookup with some other data, be it another texture lookup or a color. At this point I wouldn't like to go further into solutions before knowing which bump-mapping algorithm you use and which hardware is targeted. So, I have a few questions for you:
a. Which lighting components do you want to render with bump mapping: diffuse, specular, or both?
b. Do you use a normal map or a perturbation map?
c. In which space do you compute bump mapping? Object space? TBN space (aka texture space or tangent space)? World space? Eye space?
d. Which OpenGL extensions do you plan on using, apart from GL_ARB_texture_env_dot3?
e. Could you please post a short description of the equation you're using per-pixel (or a link to the website/PDF where you found it)?
Anyway, why do you want to skip the normalization cubemap? Do you want to save a texture unit? Do you expect better performance? If you think that you don't need to normalize (because the light is infinite), I think you should answer question (c) above at the very least.
Also, some answers:
1. They're not exactly s and t, but the s and t texture coordinates help a lot. In the "common" sense of texturing, the s and t directions represent the Tangent and Binormal vectors.
2. If you mean "identity matrix", then yes, the inverse modelview is the same as the modelview itself.
3. If the modelview matrix is the identity, then eye space and object space coincide, so yes, you can skip the light vector transformation (if the light vector is stored in eye-space coordinates). Though you're in a pretty special case if your modelview matrix is the identity. Also, if you think that you only have to call glLoadIdentity() to make computations simple, let me tell you that it's not a very good solution.
4. Depends on what replaces the cubemap.
OK, though I AM still a newbie at this... it's just so hard to learn with so few USABLE resources to learn from.
To answer your questions:
First, I want to learn HOW to do it on a simple model. So, I'm just going to use NeHe's base code.
Second, I want to DOT3 bump map the terrain in my upcoming game, which uses the Torque game engine. Basically, Torque's terrain renderer is pretty foreign to me, as I haven't had too much experience with C++ or OpenGL. (I know what I'm doing in both areas, but I'm no expert in either.)
So, the problem is that I tried to implement it in Torque first, but I couldn't get the cube map working for some reason; all I'd get on the terrain was the normal map. So I was asking around on IRC, and someone said that you don't need a cube map if the light source is static and infinite (although the light may be dynamic in later months, so I'll have to take that into account).
But first, I want to learn it, and the tutorial I'm using can be found at flipcode if you scroll down a couple of lines in the news. This tutorial uses a normalization cube map for the DOT3 product.
So, in the tutorial, it's all done in tangent space (as far as I know - this is the first time I'm touching other spaces besides modelview). But I'm not sure what Torque uses. I know the terrain is a heightfield, and I think all the transformations (if any) are in world coordinates. I really don't know the structure of the coordinate spaces, as it gets VERY complicated. It uses its own coordinate system for some things that switches the y and z axes, so it gets VERY confusing. The terrain is also rendered using vertex arrays, in chunks, and I THINK the same vertex list is used for every chunk. Oh, and the chunks get bigger as you get further away, just to complicate things even more.
Anyway, I'm going through the flipcode tutorial, and I can't figure out what I'm doing wrong. I'll post a link to all the stuff you need so maybe you can run it and give me a hint as to what I'm doing wrong. As far as I know, it's almost identical.
EDIT: Here's the (screwed-up) program (and source, because I'm too lazy to remove it).
Run Lesson22.exe, and if you press M to turn off multitexturing, it goes back to the emboss bump mapping. It doesn't show the decal yet, because that just complicates things.
Legends Development Team
[edited by - Hobbiticus on December 17, 2002 5:49:19 PM]
Since I'm now under Linux and too lazy to copy'n'paste the source code into some already-working Linux port of the NeHe tutorials, I won't be able to run the program before Friday. Thanks for the zip file anyway.
If you're not initializing the normalization cubemap correctly, I have some sample source code that does it correctly.
Even if the light is infinite, you have to use a normalization cubemap, because the light is static in eye space but dynamic in tangent space!
Normalization cubemap:

static void getCubeVector(int i, int cubesize, int x, int y, float *vector)
{
    float s, t, sc, tc, mag;
    s = ((float)x + 0.5) / (float)cubesize;
    t = ((float)y + 0.5) / (float)cubesize;
    sc = s*2.0 - 1.0;
    tc = t*2.0 - 1.0;
    switch (i) {
    case 0: vector[0] =  1.0; vector[1] = -tc;  vector[2] = -sc;  break;
    case 1: vector[0] = -1.0; vector[1] = -tc;  vector[2] =  sc;  break;
    case 2: vector[0] =  sc;  vector[1] =  1.0; vector[2] =  tc;  break;
    case 3: vector[0] =  sc;  vector[1] = -1.0; vector[2] = -tc;  break;
    case 4: vector[0] =  sc;  vector[1] = -tc;  vector[2] =  1.0; break;
    case 5: vector[0] = -sc;  vector[1] = -tc;  vector[2] = -1.0; break;
    }
    mag = 1.0 / sqrt(vector[0]*vector[0] + vector[1]*vector[1] + vector[2]*vector[2]);
    vector[0] *= mag;
    vector[1] *= mag;
    vector[2] *= mag;
}

int makeNormalizeVectorCubeMap(int size)
{
    float vector[3];
    int i, x, y;
    GLubyte *pixels = new GLubyte[size*size*3];
    if (pixels == NULL)
        return 0;
    glTexParameteri(GL_TEXTURE_CUBE_MAP_EXT, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_CUBE_MAP_EXT, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_CUBE_MAP_EXT, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_CUBE_MAP_EXT, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    for (i = 0; i < 6; i++) {
        for (y = 0; y < size; y++) {
            for (x = 0; x < size; x++) {
                getCubeVector(i, size, x, y, vector);
                pixels[3*(y*size+x) + 0] = 128 + (GLubyte)(127*vector[0]);
                pixels[3*(y*size+x) + 1] = 128 + (GLubyte)(127*vector[1]);
                pixels[3*(y*size+x) + 2] = 128 + (GLubyte)(127*vector[2]);
            }
        }
        glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X_EXT + i, 0, GL_RGB8,
                     size, size, 0, GL_RGB, GL_UNSIGNED_BYTE, pixels);
    }
    delete[] pixels;
    return 1;
}

Huge thanks to Mark J. Kilgard, who wrote one of the most excellent bump-mapping articles, which can be found at http://developers.nvidia.com under the name "Robust Bump Mapping for Today's Hardware" (or something like that).
Once the above function is defined, call:

glBindTexture(GL_TEXTURE_CUBE_MAP_EXT, normalized_vector_cube);
makeNormalizeVectorCubeMap(vector_cube_size);

where normalized_vector_cube is initialized by glGenTextures and vector_cube_size is a power of two that represents the cubemap size (typically 16 is okay).
Haha, and THAT would be the problem. Your implementation of creating the normalization cube map worked perfectly.
Now I only have one more problem: only the front and back sides are lighting up properly. Is this due to the same normal map being on all six sides of the cube? Is there anything I can do to correct this?
Legends Development Team
OK, now I'm going to start implementing it in Torque. And I've already run into a problem.
When I get the modelview matrix, it comes back as an array of length 16. But Torque has a MatrixF class (matrix of floats) that actually has rows and columns. So, which numbers do I put where?
EDIT:
What I'm doing is putting elements 0-3 in row 1, 4-7 in row 2, etc., so I don't know if that's the problem.
But I ALMOST have it - the bumps are changing according to the rotation of the camera. What exactly does this mean?
OK, apparently I wasn't multiplying the light position by the inverse modelview matrix for some reason. Now all I have is black lines appearing, because my texture coordinate array is screwed up.
Legends Development Team
[edited by - Hobbiticus on December 18, 2002 2:43:04 PM]
