I think I have just what you're looking for.
My YouTube channel covers basically what you're asking, including a detailed walkthrough of Blinn-Phong shading in my HLSL series. I walk through all the math. I find the Blinn-Phong math to be a little ugly. I can tell you off the top of my head the basic idea of how it works, but getting into the specific math, and why the math works, requires me to relearn it every time. That's another reason I did the HLSL series: so I could remind myself why it works the way it does when I need to. It's the Phong specular that's the ugly part mathematically. The Gouraud shading is actually super straightforward.
I start off with Vector and Matrix videos and draw a lot of it out so you can visualize it. Then it gets into the HLSL series. The Vector and Matrix videos are not specific to any framework, SDK, or computer language. The HLSL series is obviously HLSL, and you're working in GLSL. I've been meaning to do a video that ties GLSL into the HLSL series; the math is the same in both, but the two languages have pretty different syntax.
Here's my GLSL shader, which is almost exactly the same as the HLSL shader you end up with by the end of the HLSL series.
Vertex Shader
#version 450 core
layout (location = 0) in vec3 Pos;
layout (location = 1) in vec2 UV;
layout (location = 2) in vec3 Normal;
layout (location = 3) in vec4 Color;
uniform mat4 WorldMatrix;
uniform mat4 ViewMatrix;
uniform mat4 ProjectionMatrix;
smooth out vec2 TextureCoordinates;
smooth out vec3 VertexNormal;
smooth out vec4 RGBAColor;
smooth out vec4 PositionRelativeToCamera;
out vec3 WorldSpacePosition;
void main()
{
    gl_Position = WorldMatrix * vec4(Pos, 1.0f); //Apply the object's world matrix.
    WorldSpacePosition = gl_Position.xyz; //Save the vertex's position in the 3D world just calculated. Convert to vec3 because it will be used with other vec3's.
    gl_Position = ViewMatrix * gl_Position; //Apply the view matrix for the camera.
    PositionRelativeToCamera = gl_Position;
    gl_Position = ProjectionMatrix * gl_Position; //Apply the projection matrix to project the vertex onto a 2D plane.
    TextureCoordinates = UV; //Pass through the texture coordinates to the fragment shader.
    VertexNormal = mat3(WorldMatrix) * Normal; //Rotate the normal according to how the model is oriented in the 3D world.
    RGBAColor = Color; //Pass through the color to the fragment shader.
}
Pixel/Fragment Shader
#version 450 core
in vec2 TextureCoordinates;
in vec3 VertexNormal;
in vec4 RGBAColor;
in vec4 PositionRelativeToCamera;
in vec3 WorldSpacePosition;
layout (location = 0) out vec4 OutputColor;
uniform vec4 AmbientLightColor;
uniform vec3 DiffuseLightDirection;
uniform vec4 DiffuseLightColor;
uniform vec3 CameraPosition;
uniform float SpecularPower;
uniform vec4 FogColor;
uniform float FogStartDistance;
uniform float FogMaxDistance;
uniform bool UseTexture;
uniform sampler2D Texture0;
vec4 BlinnSpecular(in vec3 LightDirection, in vec4 LightColor, in vec3 PixelNormal, in vec3 CameraDirection, in float SpecularPower)
{
    vec3 HalfwayNormal;
    vec4 SpecularLight;
    float SpecularHighlightAmount;
    HalfwayNormal = normalize(LightDirection + CameraDirection);
    SpecularHighlightAmount = pow(clamp(dot(PixelNormal, HalfwayNormal), 0.0, 1.0), SpecularPower);
    SpecularLight = SpecularHighlightAmount * LightColor;
    return SpecularLight;
}
vec4 PhongSpecular(in vec3 LightDirection, in vec4 LightColor, in vec3 PixelNormal, in vec3 CameraDirection, in float SpecularPower)
{
    vec3 ReflectedLightDirection;
    vec4 SpecularLight;
    float SpecularHighlightAmount;
    ReflectedLightDirection = 2.0 * PixelNormal * clamp(dot(PixelNormal, LightDirection), 0.0, 1.0) - LightDirection;
    SpecularHighlightAmount = pow(clamp(dot(ReflectedLightDirection, CameraDirection), 0.0, 1.0), SpecularPower);
    SpecularLight = SpecularHighlightAmount * LightColor;
    return SpecularLight;
}
void main()
{
    vec3 LightDirection;
    float DiffuseLightPercentage;
    vec4 SpecularColor;
    vec3 CameraDirection; //vec3 because the w component really doesn't belong in a 3D vector normal.
    vec4 AmbientLight;
    vec4 DiffuseLight;
    vec4 InputColor;
    if (UseTexture)
    {
        InputColor = texture(Texture0, TextureCoordinates);
    }
    else
    {
        InputColor = RGBAColor; // vec4(0.0, 0.0, 0.0, 1.0);
    }
    LightDirection = -normalize(DiffuseLightDirection); //The normal must face into the light, rather than WITH the light, to be lit up.
    DiffuseLightPercentage = max(dot(VertexNormal, LightDirection), 0.0); //Percentage is based on the angle between the direction of the light and the vertex's normal.
    DiffuseLight = clamp((DiffuseLightColor * InputColor) * DiffuseLightPercentage, 0.0, 1.0); //Apply only the percentage of the diffuse color. Clamp the output between 0.0 and 1.0 (what HLSL's saturate() does).
    CameraDirection = normalize(CameraPosition - WorldSpacePosition); //Create a normal that points in the direction from the pixel to the camera.
    if (DiffuseLightPercentage == 0.0f)
    {
        SpecularColor = vec4(0.0f, 0.0f, 0.0f, 1.0f);
    }
    else
    {
        //SpecularColor = BlinnSpecular(LightDirection, DiffuseLightColor, normalize(VertexNormal), CameraDirection, SpecularPower);
        SpecularColor = PhongSpecular(LightDirection, DiffuseLightColor, normalize(VertexNormal), CameraDirection, SpecularPower);
    }
    float FogDensity = 0.01f;
    float LOG2 = 1.442695f; //log2(e), to do the base-e falloff with the cheaper base-2 exp2().
    float FogFactor = exp2(-FogDensity * FogDensity * PositionRelativeToCamera.z * PositionRelativeToCamera.z * LOG2);
    FogFactor = 1.0 - FogFactor;
    //float FogFactor = clamp((FogMaxDistance - PositionRelativeToCamera.z) / (FogMaxDistance - FogStartDistance), 0.0, 1.0);
    OutputColor = RGBAColor * (AmbientLightColor * InputColor) + DiffuseLight + SpecularColor;
    OutputColor = mix(OutputColor, FogColor, FogFactor);
}
I tried to write these shaders out in a way that makes it pretty self-evident what they're doing, which is pretty uncommon, especially in the Blinn-Phong part.
There are actually two different shaders here: Phong is the original, as I recall, and Blinn came up with the half-vector idea to "improve" on Phong's. That's why it's known as Blinn-Phong; it's basically the same shader with slightly different math. I have both written as functions, so you can comment out the one you don't want to use.
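Condensed down, the one line that really differs between the two functions (same variable names as in the shader above):

// Phong: how well does the camera direction line up with the reflected light direction?
SpecularHighlightAmount = pow(clamp(dot(ReflectedLightDirection, CameraDirection), 0.0, 1.0), SpecularPower);
// Blinn: how well does the pixel normal line up with the halfway vector between light and camera?
SpecularHighlightAmount = pow(clamp(dot(PixelNormal, HalfwayNormal), 0.0, 1.0), SpecularPower);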
Also, I added fog to this GLSL shader, which is something I didn't cover in the HLSL series. There are a couple different ways to do fog mathematically.
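Both of the ways I had in mind are already in main() above, one live and one commented out. Here's a minimal sketch of the two, using the same names as the shader (note that the mix() at the end of main() treats a FogFactor of 1.0 as full fog):

// Exponential: e^(-(density * distance)^2). GPUs expose a cheap exp2() (2^x) rather
// than e^x, so the exponent is multiplied by log2(e) ~= 1.442695 to work in base 2.
float Dist = length(PositionRelativeToCamera.xyz); // main() above uses the view-space z instead
float ExpFog = 1.0 - exp2(-FogDensity * FogDensity * Dist * Dist * 1.442695);
// Linear: no fog before FogStartDistance, ramping to full fog at FogMaxDistance.
float LinearFog = 1.0 - clamp((FogMaxDistance - Dist) / (FogMaxDistance - FogStartDistance), 0.0, 1.0);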
I've also been meaning to put together a high-level video explaining the pipeline: you feed the vertex shader a buffer of vertices that define your model, and it outputs the altered vertices to the rasterizer (and to stages I don't use, like geometry and tessellation). At that stage my shader has converted the vertices from the 3D game world to 2D normalized device coordinates that can be drawn on the computer screen, and the later stages actually work in 2D. Rasterization is mostly done for you: it shades the area between every three vertices to form triangles that look 3D on the 2D drawing surface. As it does this, it sends every pixel to the fragment/pixel shader and lets you alter the color of that individual pixel, so your output there is a single pixel color and nothing more. There's a blend stage after that whose state you can set, but you don't really write shaders for it. Then the back buffer is presented to the screen and you see the image. The whole thing is about drawing triangles.
Anyway, the way they usually write out Phong, it's nearly impossible to understand. I've written it out here to be as understandable as possible and it's still pretty ugly:
vec4 PhongSpecular(in vec3 LightDirection, in vec4 LightColor, in vec3 PixelNormal, in vec3 CameraDirection, in float SpecularPower)
{
    vec3 ReflectedLightDirection;
    vec4 SpecularLight;
    float SpecularHighlightAmount;
    ReflectedLightDirection = 2.0 * PixelNormal * clamp(dot(PixelNormal, LightDirection), 0.0, 1.0) - LightDirection;
    SpecularHighlightAmount = pow(clamp(dot(ReflectedLightDirection, CameraDirection), 0.0, 1.0), SpecularPower);
    SpecularLight = SpecularHighlightAmount * LightColor;
    return SpecularLight;
}
You're basically determining the color, and thus the amount of light, from the specular light to apply to every pixel individually. Each pixel gets a percentage of the light based on how closely the camera direction aligns with the angle the light reflects off of the pixel at. (Imagine the pixel in 3D space, with its normal being the direction it faces.)
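As an aside, GLSL has a built-in reflect() that computes the same reflected direction the hand-written line builds manually; reflect(I, N) is defined as I - 2.0 * dot(N, I) * N, and feeding it the negated light direction gives the same result. (The clamp() in the original only matters when dot(N, L) is negative, and main() never calls the function in that case.)

// Equivalent to: 2.0 * PixelNormal * dot(PixelNormal, LightDirection) - LightDirection
// reflect() wants the incident vector pointing toward the surface, hence the negation.
ReflectedLightDirection = reflect(-LightDirection, PixelNormal);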
You asked about this:
vec3 ambient = Light.La * Material.Ka
The code you provided seems to be cut off a bit prematurely, and it's written in a way that's a bit difficult to follow. But it would appear that you have a light intensity color and a light reflectivity color for the ambient light. Multiplying two colors basically gives you the weaker of the two: imagine white being all 1's and the other color being something like all 0.4's; multiply 0.4 times 1.0 and you get 0.4. So it blends the two colors in a way that's weighted toward the weaker color. Adding colors, by contrast, tends to blow them out, because the sum would be 1.4 and 1.0 is already white. You can't get any more white than white. But there are some places where you add them.
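In GLSL that multiply is component-wise, so you can see the "weaker color wins" behavior directly:

vec3 La = vec3(1.0, 1.0, 1.0);         // white light intensity
vec3 Ka = vec3(0.4, 0.4, 0.4);         // the 0.4's from the example above
vec3 ambient = La * Ka;                // (0.4, 0.4, 0.4): weighted toward the weaker color
vec3 added = clamp(La + Ka, 0.0, 1.0); // 1.4 clamps to 1.0: blown out to white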
Ambient light is super straightforward. It's just a color and you're done. Here they appear to be complicating it a bit by giving you some control over the brightness of that color, and it's unclear what they mean by "reflectivity"; there is no reflectivity in Gouraud shading. They seem to be using it to control the brightness of the ambient color. I just specify a color that, in theory, takes all that into account: when I originally specify the color, I can say how bright I want it to be and then just hand the shader the color. Ambient light is just a color that gets applied to everything. By itself it results in a silhouette shader, and it doesn't look any more 3D than a shadow.
With this line:
vec3 tnorm = normalize( NormalMatrix * VertexNormal);
It's confusing again, especially since I think we're in the fragment/pixel shader here. So, first off, we're dealing with a pixel, not a vertex, at this point. Assuming that's the case, the values coming into the fragment/pixel shader are interpolated across the face of the triangle you're drawing. Colors, for example, are interpolated (given a weighted average depending on distance from each vertex) to blend them between the three vertices of the triangle, and the fragment/pixel shader gets that interpolated value. In this case, the "pixel normal" you get here is the vertex normal interpolated between the three vertex normals of the triangle's three vertices: a weighted average of the directions the three vertices face in, which gives you the direction the pixel faces.
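That interpolation is also why my main() calls normalize(VertexNormal) before handing it to the specular functions: a weighted average of three unit-length normals is generally shorter than unit length, so it gets re-normalized per pixel.

vec3 PixelNormal = normalize(VertexNormal); // interpolation shrinks the vector; restore unit length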
It doesn't explain what a "normal matrix" is. So, I can only guess.
Probably what's going on here is that your vertex normals were defined in your modeling program, like Maya, Max, or Blender. They were imported from a file where the normals and vertices are in the positions and directions they had before you exported the model; they call that "model space". Your world/object matrix modified those vertex positions to place and orient the model in the 3D world. The view matrix further modified them to simulate a camera moving through the scene. And the projection matrix modified the vertex positions to project them onto the 2D screen as normalized device coordinates for actual drawing. They call that the MVP matrix in your code there. They give you those matrices in different levels of being combined; you can see how I handle that in my shader, which is a bit different.
But I believe the "normal matrix" probably exists because all those matrices changed the positions of the vertices, not their normals. So, the normals still point in the directions they had in your modeling program before you exported the file, which is not helpful at all. You need to apply the world matrix to those normals to reorient them according to how the model was placed in the scene, so that they point in the correct directions. That's probably what that is: the world matrix.
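That's what my vertex shader above does with VertexNormal = mat3(WorldMatrix) * Normal. One extra wrinkle a dedicated normal matrix usually handles, and this is just a guess at what the book intends, is non-uniform scale, where rotating the normal with the plain world matrix skews it; the standard fix is the inverse transpose of the upper-left 3x3:

// My guess at their "NormalMatrix": the inverse transpose of the world matrix's upper-left 3x3.
// With only rotation and uniform scale, it reduces to plain mat3(WorldMatrix).
mat3 NormalMatrix = transpose(inverse(mat3(WorldMatrix)));
VertexNormal = normalize(NormalMatrix * Normal);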
Oh sorry. They call it the ModelViewProjection Matrix rather than the WorldViewProjection Matrix. 6 of one and half a dozen of the other. Model, Object, World, it doesn't matter what you call it; it's the matrix that positions the model in the scene.
Anyway, I suggest going through my HLSL series. It won't hurt you to learn HLSL, since some of the books out there use HLSL instead of GLSL, and the math is the same either way. You can probably skip the first video, as it just talks about the calling code, which will be different from OpenGL anyway. But you can look at the GLSL shader above to compare it to what's in the HLSL videos.
Off the top of my head, the primary difference is that HLSL has constant buffers, and GLSL's uniforms appear to be basically the same thing. You also don't deal with individual registers in GLSL, which was always the semi-confusing part of HLSL, so that's kind of nice. GLSL calls it a fragment shader and HLSL calls it a pixel shader. The way you tell the shader how your vertex buffer is laid out is slightly different between the two. Other than that, it's mostly the same thing with slightly different syntax. Having the GLSL version of the exact same shader above should help bring the two together.
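If it helps tie the two together, GLSL also has uniform blocks, which are the closest analogue to an HLSL constant buffer; the loose uniforms in my shader above are the same idea without the grouping. A quick sketch (the block name PerFrame is just something I made up):

// Roughly what HLSL calls a cbuffer. std140 pins down the memory layout
// so the calling code can fill the buffer predictably.
layout (std140) uniform PerFrame
{
    mat4 ViewMatrix;
    mat4 ProjectionMatrix;
    vec4 AmbientLightColor;
};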
I have the entire project and source code in OGL 4.5 here, so you can see the OGL code that calls the shader as well. I haven't had time to comment and clean up that code as much as I would like, but it should still be somewhat self-documenting. On that web page there's a link to a video showing the running program that uses the shader above.