
GPU fragment writing order with face culling off

Started by November 01, 2020 05:30 AM
6 comments, last by ddlox 4 years, 3 months ago

Hello guys, I have a quick question and I can't seem to find a concrete answer, so I decided to ask here. First, let's assume the following configuration for the later stages of the rendering pipeline:

  • Face culling is disabled, so both sides of a triangle get rasterized
  • Depth testing is enabled and configured to preserve anything with equal or less distance from camera
  • Depth writing also enabled 
  • No stencil involved
  • No blending

We render a triangle on the screen, with the camera looking straight at it. The triangle goes through the rasterizer and fragments get generated for each side (since face culling is disabled). All fragments pass the early depth test due to the depth test config mentioned above (right?). They go through the fragment shader and later they get written into the color buffer… My question is: how is the order of these fragments decided when they are being written?

I feel like I'm missing something in the theory. If I have the following fragment shader (HLSL):

fixed4 frag (fixed facing : VFACE) : SV_Target
{
  if (facing > 0)
  {
     return fixed4(0,1,0,1); // front-facing: green
  }
  else
  {
     return fixed4(1,0,0,1); // back-facing: red
  }
}

The VFACE semantic is > 0 for front-facing fragments and < 0 for back-facing ones, and it yields the following result (the triangle appears green from the front and red from the back):

So I'm scratching my head here, thinking that the fragments of both sides get written into the color buffer, but something makes sure that, when looking from the front side, all the green fragments get written last, or that the red fragments get discarded before the green ones… Or I'm forgetting something important.

Is my reasoning ok? Thanks!

Each triangle only gets rasterized once; there's no need to rasterize the side of the face that's pointing away from the camera, since you can't see it. So there's no order to resolve here: you will either get only green fragments or only red fragments in this case (never both).


Ah that makes a lot of sense, thanks!

This is exactly what the z-buffer algorithm is about. The algorithm is so simple that it is implemented in hardware by all modern graphics cards.

The z-buffer algorithm goes like this:

// pseudocode
// initialize the depth of each screen pixel to "very far away":
depth[m][n] = 10000000

// initialize the colour of each screen pixel:
colour[m][n] = black or whatever the clear colour is

// now, for each polygon (the red one and the green one; the order doesn't matter), do this:

foreach of the polygon's screen pixels:
{
   // take the z value of the point (x, y, z) that projects to pixel [m][n]

    if (z < depth[m][n])
    {
       colour[m][n] = polygon colour
       depth[m][n] = z
    }
}

You see, if you run this algorithm as described here from the same camera location, you will end up with either the red or the green polygon on screen, depending on which polygon was processed first; that's because both polygons have the same z values. So when I said the order doesn't matter, I meant "the algorithm doesn't care which polygon is used first or last to do its comparisons"; unfortunately, visually, we could end up with the wrong polygon on screen.
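To see that tie-breaking behaviour concretely, here is a minimal runnable sketch of the z-buffer loop above (plain Python, reduced to a single pixel; the polygon names and z values are just illustrative):

```python
# Minimal z-buffer sketch: one pixel, two coplanar polygons at the same depth.
FAR = 10_000_000  # the "very far away" initial depth

def rasterize(polygons):
    """Run the z-buffer comparison for one pixel over a list of (name, z) polygons."""
    depth = FAR          # depth[m][n] initialized to "very far away"
    colour = "black"     # colour[m][n] initialized to the clear colour
    for name, z in polygons:
        if z < depth:    # strict less-than: a polygon at an equal depth loses
            colour = name
            depth = z
    return colour

# Both polygons project to the same pixel with the same z value,
# so whichever one is submitted first wins the strict '<' comparison.
print(rasterize([("green", 5.0), ("red", 5.0)]))  # green
print(rasterize([("red", 5.0), ("green", 5.0)]))  # red
```

With unequal depths the submission order stops mattering, which is the whole point of the algorithm; it is only the equal-depth, two-sided case from this thread that degenerates to "first one wins".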

So the algorithm by itself is incomplete: it needs another piece of information to discard the "wrong" polygons. That piece of information is provided by the hidden surface (or polygon) removal algorithm. It is also very simple and consists of dot-producting a polygon's normal with the camera view vector: if the result of this dot product is greater than 0, we take the polygon and feed it to the z-buffer algorithm; if not, we get rid of it.

// pseudocode
// camera view vector    C = (1, 2, 3, 0)
// green polygon normal  G = (0, 0, 1, d)
// red polygon normal    R = (0, 0, -1, d)
// d is the same for both because they're on the same plane, but the normals have opposing z

float red_dot_p   = R · C = -3
float green_dot_p = G · C =  3

So in this example the green polygon's dot product is positive: it will be fed to the z-buffer and will be visible, while the red one doesn't even need to be fed to the z-buffer.

If you moved the camera to the opposite side, such that C.z = -3, and did the dot products again, you would see that the red dot product is 3 and the green one is -3; so this time round the red polygon is passed to the z-buffer and will be visible, and not the green one.
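The two-sided test above can be sketched in a few lines of runnable code (plain Python; the vectors are the ones from the pseudocode, and the acceptance convention — dot product > 0 means "feed it to the z-buffer" — follows the description above, with the function names made up for this example):

```python
def dot(a, b):
    """Plain 4-component dot product."""
    return sum(x * y for x, y in zip(a, b))

def facing_polygons(view, polygons):
    """Keep only the polygons whose normal dot-producted with the view vector is > 0."""
    return [name for name, normal in polygons if dot(normal, view) > 0]

d = 0.0  # plane offset; the same for both normals, and irrelevant here since view.w == 0
polys = [("green", (0, 0, 1, d)),
         ("red",   (0, 0, -1, d))]

print(facing_polygons((1, 2, 3, 0), polys))   # ['green']  (dot products: 3 and -3)
print(facing_polygons((1, 2, -3, 0), polys))  # ['red']    (camera moved so C.z = -3)
```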

That's it … have fun 🙂

@ddlox Thanks for the detailed explanation! I do have one more question now… Regarding the second algorithm, what happens in my case, where there is a single triangle and both front and back faces are being culled? There is only one normal per triangle… so I guess that the rasterizer inverts the normal depending on which side it is working on? Like, for front faces it uses the triangle normal and for back faces it takes the opposite normal.

This actually has nothing to do with z-buffers or depth testing: you would get the same exact result in this case even if you disabled depth testing entirely. It's really just about how the rasterizer handles triangles and face culling, like I alluded to earlier.

You really shouldn't think of there being "two faces" to a triangle here. As far as the rasterizer is concerned there's only one face per triangle; the only question is whether or not that triangle gets culled due to the winding-order and culling rules. (While you can do a back-face/front-face test using a dot product with the view vector as described, hardware rasterizers don't do it that way: they do it based on whether the vertices are clockwise or counter-clockwise, which can be calculated with a cross product of the edge vectors.) This is what the rasterization process (very) roughly looks like in pseudocode:

for(triangle : triangles)
{
    windingOrder = CalcWindingOrder(triangle);

    if(rasterizerState.FrontCounterClockwise)
        frontFacing = (windingOrder == CounterClockWise);
    else
        frontFacing = (windingOrder == ClockWise);

    cull = false;
    if(rasterizerState.CullMode == CullBack)
        cull = (frontFacing == false);
    else if(rasterizerState.CullMode == CullFront)
        cull = frontFacing;

    if(cull)
        continue;

    pixels = GetCoveredPixels(triangle);
    for(pixel : pixels)
        pixel = ExecutePixelShader(InterpolateAttributes(triangle), frontFacing);
}
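The `CalcWindingOrder` step above boils down to the sign of a 2D cross product of the triangle's edge vectors in screen space. Here is a minimal runnable sketch (plain Python; the function name mirrors the pseudocode but is otherwise made up, and it assumes typical screen coordinates with y pointing down):

```python
def calc_winding_order(p0, p1, p2):
    """Winding order of a screen-space triangle from the 2D cross product
    of its edge vectors: (p1 - p0) x (p2 - p0).

    With y pointing DOWN (typical screen coordinates), a positive cross
    product means the vertices are in clockwise order.
    """
    e1 = (p1[0] - p0[0], p1[1] - p0[1])
    e2 = (p2[0] - p0[0], p2[1] - p0[1])
    cross = e1[0] * e2[1] - e1[1] * e2[0]
    return "clockwise" if cross > 0 else "counter-clockwise"

# y grows downward, as on screen:
print(calc_winding_order((0, 0), (10, 0), (0, 10)))  # clockwise
print(calc_winding_order((0, 0), (0, 10), (10, 0)))  # counter-clockwise
```

Note that the two calls use the same three points in opposite submission order: the winding flips, which is exactly why a single triangle reads as front-facing from one side and back-facing from the other without any "second face" existing.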

If you have one triangle, both sides are culled, and you only have one normal, that means any of these:

  • this triangle is always facing away from the camera irrespective of where the camera is, so both the view vector and the triangle normal are pointing in the "same" direction
  • the winding order of the vertices that define the triangle's side/face is taken as the order used to cull faces away, and both sides of this triangle are defined with their verts in that order (probably not what you want to do)
  • …

The rasterizer does not invert anything; it only rasterizes: it generates the screen scanlines of triangles using interpolation methods.

So what the rasterizer draws is, if you like, "what was decided earlier" in your culling stage.

That's it … all the best 🙂

This topic is closed to new replies.
