
Per pixel bump mapping with ATI pixel shader

Started by July 16, 2004 05:51 AM
14 comments, last by vincoof 20 years, 4 months ago
assuming that:
GL_TEXTURE0_ARB is my normalized cube map
GL_TEXTURE1_ARB is my normal map
GL_TEXTURE2_ARB is my decal texture
i wrote this shader to compute bump mapping, but it doesn't work :D

glEnable(GL_FRAGMENT_SHADER_ATI);
GLuint shader = glGenFragmentShadersATI(1);
glBindFragmentShaderATI(shader);
glBeginFragmentShaderATI();
glSampleMapATI(GL_REG_0_ATI, GL_TEXTURE0_ARB, GL_SWIZZLE_STR_ATI);
glPassTexCoordATI(GL_REG_1_ATI, GL_TEXTURE1_ARB, GL_SWIZZLE_STR_ATI);
glPassTexCoordATI(GL_REG_2_ATI, GL_TEXTURE2_ARB, GL_SWIZZLE_STR_ATI);
glColorFragmentOp2ATI(GL_DOT3_ATI, GL_REG_0_ATI, GL_NONE, GL_NONE,
GL_REG_1_ATI, GL_NONE, GL_NONE,
GL_REG_0_ATI, GL_NONE, GL_NONE);
glColorFragmentOp2ATI(GL_ADD_ATI, GL_REG_0_ATI, GL_NONE, GL_NONE,
GL_REG_2_ATI, GL_NONE, GL_NONE,
GL_REG_0_ATI, GL_NONE, GL_NONE);
glEndFragmentShaderATI();

any advice?
<Disclaimer> I haven't done much in the way of bump mapping and I don't have an ATI card, so I don't know the extension, but looking through the spec here is one possibility I can think of</Disclaimer>

Assuming that you're passing the vector from the object to the light in texCoord[0] and standard tangent space vertex coords in texCoord[1] and texCoord[2] and that by 'decal texture' you mean 'diffuse texture' then I would assume you need to do the following:
glEnable(GL_FRAGMENT_SHADER_ATI);
GLuint shader = glGenFragmentShadersATI(1);
glBindFragmentShaderATI(shader);
glBeginFragmentShaderATI();
glSampleMapATI(GL_REG_0_ATI, GL_TEXTURE0_ARB, GL_SWIZZLE_STR_ATI);
glPassTexCoordATI(GL_REG_1_ATI, GL_TEXTURE1_ARB, GL_SWIZZLE_STR_ATI);
glPassTexCoordATI(GL_REG_2_ATI, GL_TEXTURE2_ARB, GL_SWIZZLE_STR_ATI);
// add GL_SATURATE_BIT_ATI to clamp the result to [0..1] and avoid backface lighting
// (assuming I'm understanding GL_SATURATE_BIT_ATI correctly).
// EDIT: Does this actually matter? Off the top of my head I think this is only necessary
// if you have an ambient term as well or compute multiple lights in a single pass, to
// avoid spurious subtraction of light contributions. Otherwise it will just be clamped
// when written to the colour buffer regardless.
glColorFragmentOp2ATI(GL_DOT3_ATI, GL_REG_0_ATI, GL_NONE, GL_SATURATE_BIT_ATI,
GL_REG_1_ATI, GL_NONE, GL_NONE,
GL_REG_0_ATI, GL_NONE, GL_NONE);
// change GL_ADD_ATI to GL_MUL_ATI to modulate the light intensity with the diffuse texture.
glColorFragmentOp2ATI(GL_MUL_ATI, GL_REG_0_ATI, GL_NONE, GL_NONE,
GL_REG_2_ATI, GL_NONE, GL_NONE,
GL_REG_0_ATI, GL_NONE, GL_NONE);
glEndFragmentShaderATI();
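For reference, a rough sketch of the client-side setup this shader expects, assuming ARB_multitexture and ARB_texture_cube_map are available; the texture ids (normalizationCubeMap, normalMap, decalMap), the lightTS vector and the u/v values are placeholders, not anything from the original code:

glActiveTextureARB(GL_TEXTURE0_ARB); // unit 0: normalization cube map
glBindTexture(GL_TEXTURE_CUBE_MAP_ARB, normalizationCubeMap);
glActiveTextureARB(GL_TEXTURE1_ARB); // unit 1: normal map
glBindTexture(GL_TEXTURE_2D, normalMap);
glActiveTextureARB(GL_TEXTURE2_ARB); // unit 2: decal/diffuse texture
glBindTexture(GL_TEXTURE_2D, decalMap);

// per vertex: the vertex-to-light vector in tangent space goes into texcoord set 0,
// the regular texture coordinates go into sets 1 and 2
glMultiTexCoord3fARB(GL_TEXTURE0_ARB, lightTS.x, lightTS.y, lightTS.z);
glMultiTexCoord2fARB(GL_TEXTURE1_ARB, u, v);
glMultiTexCoord2fARB(GL_TEXTURE2_ARB, u, v);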

If that doesn't help then I'm sure somebody who actually knows what they're doing will be along soon to help you out :P

Enigma
yes it works, thank you :D
I'm surprised it works.
There are many issues in the shader you wrote.

First, you don't want to pass the raw texture coordinates through; you should sample from all three textures:
glSampleMapATI(GL_REG_0_ATI, GL_TEXTURE0_ARB,GL_SWIZZLE_STR_ATI);
glSampleMapATI(GL_REG_1_ATI, GL_TEXTURE1_ARB,GL_SWIZZLE_STR_ATI);
glSampleMapATI(GL_REG_2_ATI, GL_TEXTURE2_ARB,GL_SWIZZLE_STR_ATI);

Secondly, you should expand the normals because they are compressed into the [0,1] range. To expand them to the [-1,+1] range, use 2X and BIAS:
glColorFragmentOp2ATI(GL_DOT3_ATI, GL_REG_0_ATI, GL_NONE, GL_SATURATE_BIT_ATI,
GL_REG_1_ATI, GL_NONE, GL_2X_BIT_ATI|GL_BIAS_BIT_ATI,
GL_REG_0_ATI, GL_NONE, GL_2X_BIT_ATI|GL_BIAS_BIT_ATI);
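For anyone wondering what 2X and BIAS do numerically, here is the mapping written out as plain C++ (the helper names are made up; it is just the encode/decode arithmetic described above):

float encodeComponent( float n ) { return n * 0.5f + 0.5f; } // store a [-1,+1] component as [0,1] in the normal map
float expandComponent( float t ) { return 2.0f * (t - 0.5f); } // 2X|BIAS: back from [0,1] to [-1,+1]
// e.g. a normal component of 0.0 is stored as 0.5, and 2*(0.5 - 0.5) = 0.0 comes back out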

Lastly, you should SATURATE the result. I mean, the very last operation produces the output colour, so it should always be in the [0,1] range:
glColorFragmentOp2ATI(GL_ADD_ATI, GL_REG_0_ATI, GL_NONE, GL_SATURATE_BIT_ATI,
GL_REG_2_ATI, GL_NONE, GL_NONE,
GL_REG_0_ATI, GL_NONE, GL_NONE);

There you should get the same rendering that you had with ARB_texture_env_combine. So why switch to ATI_fragment_shader? Simply because now you can add *many* effects easily. With ARB_texture_env_combine you are pretty limited.
yes, now i can add many more effects than with the register combiners :D i've just written the shader for specular lighting :)
Feel free to post screenshots when it's done ;)
i will, don't worry :)

but i still have a problem. It seems that i'm making some mistakes when i compute the tangent space for each vertex :/

i use code similar to the Quake3 source on GameTutorials.com,
basically i have these structs:


struct tOURVertex
{
CVector3 vPosition; // The vertex position
CVector2 vTextureCoord; // The texture coordinates (u, v)
CVector3 vNormal; // The vertex normal
};

struct tOURFace
{
int textureID;
int startVertIndex; // The starting index into this face's first vertex
int numOfVerts; // The number of vertices for this face
int meshVertIndex; // The index into the first meshvertex
int numMeshVerts; // The number of mesh vertices
CVector3 vNormal; // The face normal.
};

and in the main class i store an array of ints for the indices, an array of faces and an array of verts, so when i render i can use this:
glDrawElements(GL_TRIANGLES, pFace->numMeshVerts,GL_UNSIGNED_INT,&m_pIndexArray[pFace->meshVertIndex]);
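For context, the member declarations this call assumes would look roughly like this (the exact types are a guess based on the snippets in this post):

int* m_pIndexArray; // indices into the vertex array, filled when the map is loaded
tOURVertex* m_pVerts; // one entry per vertex
tOURFace* m_pFaces; // one entry per face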

Now i dunno how to compute the tangent space for each vertex :/
i've made many attempts but there's probably always an error that i don't see :°

I use an array that stores (for each vertex) three CVector3:
the tangent, the binormal (called Binomial in my code) and the light vector expressed in tangent space.

when i load the map i call these functions:

void computeTangentsAndBinormals( void )
{
    int size, start, VertStart;
    int i1, i2, i3;

    // First pass: accumulate an (un-normalized) tangent for every vertex of every triangle
    for( int j = 0; j < m_numOfFaces; j++ ) {
        start = m_pFaces[j].meshVertIndex;
        VertStart = m_pFaces[j].startVertIndex;
        for( int i = 0; i < m_pFaces[j].numMeshVerts; i += 3 ) {
            i1 = m_pIndexArray[start + i];
            i2 = m_pIndexArray[start + i + 1];
            i3 = m_pIndexArray[start + i + 2];
            computeTangentVector( i1, i2, i3 );
        }
    }

    // Second pass: normalize the tangents and build the binormals from normal x tangent
    for( int j = 0; j < m_numOfVerts; j++ ) {
        BumpMap[j].Tangent.Normalize();
        BumpMap[j].Binomial = Cross(Verts[j].Normal, BumpMap[j].Tangent);
        BumpMap[j].Binomial.Normalize();
    }
}

void computeTangentVector( int a, int b, int c )
{
    // Edges of the triangle relative to vertex a, in object space
    CVector3 vAB = Verts[b] - Verts[a];
    CVector3 vAC = Verts[c] - Verts[a];

    // Corresponding deltas of the v (y) texture coordinate
    float dvAB = TexPos[b].y - TexPos[a].y;
    float dvAC = TexPos[c].y - TexPos[a].y;

    // Accumulate the (un-normalized) tangent into all three vertices of the triangle
    CVector3 Tangent = (vAB * dvAC) - (vAC * dvAB);
    BumpMap[a].Tangent += Normalize(Tangent);
    BumpMap[b].Tangent += Normalize(Tangent);
    BumpMap[c].Tangent += Normalize(Tangent);
}
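Note that this only uses the v (y) texture coordinate deltas, like the GameTutorials code. A more common per-triangle tangent computation uses both texture gradients; a sketch, assuming the CVector2/CVector3 operators used above exist (the function name is made up, it is not part of the original code):

void computeTangentVectorFull( int a, int b, int c )
{
    // object-space edges and texture-space edges of the triangle
    CVector3 e1 = Verts[b] - Verts[a];
    CVector3 e2 = Verts[c] - Verts[a];
    float du1 = TexPos[b].x - TexPos[a].x, dv1 = TexPos[b].y - TexPos[a].y;
    float du2 = TexPos[c].x - TexPos[a].x, dv2 = TexPos[c].y - TexPos[a].y;

    float det = du1 * dv2 - du2 * dv1;
    if( det == 0.0f ) return; // degenerate UV mapping, skip this triangle
    float r = 1.0f / det;

    // tangent follows the u direction, binormal follows the v direction
    CVector3 Tangent  = (e1 * dv2 - e2 * dv1) * r;
    CVector3 Binormal = (e2 * du1 - e1 * du2) * r;

    BumpMap[a].Tangent += Normalize(Tangent);
    BumpMap[b].Tangent += Normalize(Tangent);
    BumpMap[c].Tangent += Normalize(Tangent);
    // binormals can either be accumulated the same way or rebuilt later
    // from Cross(normal, tangent) as in computeTangentsAndBinormals()
}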

then, when i render a face i call:

void ShiftTextureCoords( int index, int start, CVector3 LightPosition )
{
    // start is the index of the first vertex of the face
    int end = start + numOfverts;
    CVector3 vLightToVertex;

    for( int i = start; i < end; ++i ) {
        // Vector from the vertex towards the light, then projected onto the TBN axes
        vLightToVertex = LightPosition - m_pVerts[i].vPosition;
        vLightToVertex.Normalize();

        m_Bump[i].Coords.x = DotProduct(m_Bump[i].Tangent, vLightToVertex);
        m_Bump[i].Coords.y = DotProduct(m_Bump[i].Binomial, vLightToVertex);
        m_Bump[i].Coords.z = DotProduct(m_pVerts[i].vNormal, vLightToVertex);
    }
}
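Once Coords holds the tangent-space light vector, it is what should end up in texture coordinate set 0 for the normalization cube map. A minimal sketch of that, assuming immediate mode (with glDrawElements you would instead put these values into a texcoord array bound on unit 0):

glMultiTexCoord3fARB(GL_TEXTURE0_ARB,
m_Bump[i].Coords.x,
m_Bump[i].Coords.y,
m_Bump[i].Coords.z);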

now someone PLEASE tell me where's the error :°°°


There are two stages in your algorithm:
first, compute the TBN triplet for each vertex, as a loading-time process.
secondly, transform the light vector into tangent space at each vertex, which changes every frame.

The first stage feeds the second, so you have to be absolutely sure that the first one works before checking the second.

To check the validity of the first stage, I recommend mapping the TBN triplet to the model's colour.
That is, call:
glColor3f(m_Bump[i].Tangent.x*0.5+0.5, m_Bump[i].Tangent.y*0.5+0.5, m_Bump[i].Tangent.z*0.5+0.5);
and disable all texturing, disable lighting, and disable fragment shaders (if it's too complicated to disable all of that, then simply use your fragment shader to do this). This will show what the tangents look like. Your model will be rendered with colours that represent the tangent direction: red-ish vectors point in the positive X direction, non-red vectors point in the negative X direction, green-ish vectors point in the Y direction, etc.
If the tangents are correct, check the normals and binormals using the same method.

And if the whole triplet is correct, check the light conversion into tangent space using the same method:
glColor3f(m_Bump[i].Coords.x*0.5+0.5, m_Bump[i].Coords.y*0.5+0.5, m_Bump[i].Coords.z*0.5+0.5);

PS: multiplying by 0.5 and adding 0.5 maps a vector from the [-1,+1] range to the [0,1] range needed by colours. Note that you probably need to normalize your light vector before sending it as a colour like that.
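A rough debug-render sketch of that idea, borrowing the array names from the code above (BumpMap, m_pVerts, m_pIndexArray, pFace) and assuming immediate mode:

glDisable(GL_FRAGMENT_SHADER_ATI);
glDisable(GL_TEXTURE_2D);
glDisable(GL_LIGHTING);
glBegin(GL_TRIANGLES);
for( int i = 0; i < pFace->numMeshVerts; ++i ) {
    int v = m_pIndexArray[pFace->meshVertIndex + i];
    // map the tangent from [-1,+1] to [0,1] and draw it as the vertex colour
    glColor3f(BumpMap[v].Tangent.x * 0.5f + 0.5f,
              BumpMap[v].Tangent.y * 0.5f + 0.5f,
              BumpMap[v].Tangent.z * 0.5f + 0.5f);
    glVertex3f(m_pVerts[v].vPosition.x, m_pVerts[v].vPosition.y, m_pVerts[v].vPosition.z);
}
glEnd();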
if i do as you said, when i render i see:
for tangents, every face is mostly green,
for binormals, every face is mostly violet,
for normals, every face is mostly blue.

i'm going mad...

On which model do you compute the triplet? Is it a model that has lots of orientations (like a player model) or a model that has a preferred orientation (an almost flat surface like a floor)?
The way you describe the tangents, normals and binormals suggests that either you're in the second case (and this would be the correct behaviour), or your computations lead to a privileged direction (and this would not be correct).

This topic is closed to new replies.
