
Backface culling and perspective transformation

Started by July 11, 2000 08:57 AM
5 comments, last by TPH 24 years, 5 months ago
I've been writing some programs that do simple things (got to start somewhere) in order to render a 3D image. Nothing special, just a cube for now. In most textbooks, polygon normals are transformed into view space and back face culling is done there. But if I do that and then apply the perspective transformation, the normals end up pointing in a slightly wrong direction. So I transform the normals through the perspective as well, but then they are in homogeneous coordinates and need the division by W before I can cull the polygons. This works, but is way too inefficient. Any ideas as to what I should be doing?
You should not confuse the 'normals' used for lighting with the polygon normal defined by the polygon's vertices.

Neither kind of normal is transformed by the projection.

Lighting normals are transformed only if you apply a transformation to that vertex; you can choose whatever normal you want for every vertex.
The polygon normal changes too if you transform (move) the vertices, but it is a geometric property of the polygon.

The polygon normal is what is used in back face culling.
OpenGL, for example, takes vertices 0, 1, 2 and computes the polygon's plane.
If the viewer (at 0,0,0) has a negative distance from that plane, the polygon is culled.



Edited by - Andrea on July 11, 2000 1:17:26 PM

IpSeDiXiT
Thanks for the reply Andrea.

I can appreciate that the polygon normal is a geometric property, but I'm still a little confused as to how I cull the polygons that will effectively be back facing from the camera viewpoint once the perspective transformation is applied to the object vertices.

Thanks in advance!
First of all: back face culling can be performed before projection and rasterization.
I don't know if it's possible to perform it after projection, but I think not.
We apply back face culling after transformation, that is, after multiplying the polygon vertices by a transformation matrix.

Suppose you have a polygon with vertices
v0, v1, v2, ..., vn
These vertices can be derived from a transformation (translation, rotation, ...) of the original vertices, or they can be the ones exactly as you edited them.

The polygon plane has normal

n = (v1 - v0) x (v2 - v0)    (x is the 'cross' product, vectorial)

and 'd' coefficient

d = -n * v0    (* is the 'dot' product, scalar)

(so the plane has equation n.x*x + n.y*y + n.z*z + d = 0)

All you need is the SIGN of the distance between this plane and the viewer.
By definition the viewer is at the origin, looking along the z axis (whether z>0 or z<0 depends on your own conventions).
You know that in 3D if you have, for example, to rotate the 'camera', you simply rotate the universe in the opposite direction.

distance(P, plane) = (P * n + d) / length(n)
where length(n) = sqrt(n.x*n.x + n.y*n.y + n.z*n.z)

But you only need the sign, and P = (0,0,0)

-> sgn(distance) = sgn(d)

So it simply reduces to

if d > 0 -> draw
else -> cull

where, recall,

d = - [ (v1-v0) x (v2-v0) ] * v0

Or vice versa: you only have to invert the if statement if you see a wrong polygon being culled (or rendered).
It depends on your z convention and on your clockwise / anti-clockwise winding convention.

There is nothing else on back face culling.
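The whole test above can be sketched in a few lines of C (the Vec3 type and the function names are my own, for illustration, not from any particular renderer):

```c
typedef struct { double x, y, z; } Vec3;

static Vec3 sub(Vec3 a, Vec3 b)  { return (Vec3){a.x-b.x, a.y-b.y, a.z-b.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return (Vec3){ a.y*b.z - a.z*b.y,
                   a.z*b.x - a.x*b.z,
                   a.x*b.y - a.y*b.x };
}

/* Back face test for a triangle already in view space (viewer at the
   origin). Returns 1 if the triangle should be drawn, 0 if culled.
   Whether you compare with > or < depends on your winding and z
   conventions, as noted above. */
static int is_front_facing(Vec3 v0, Vec3 v1, Vec3 v2) {
    Vec3 n = cross(sub(v1, v0), sub(v2, v0)); /* polygon normal          */
    double d = -dot(n, v0);                   /* plane: n*p + d = 0      */
    return d > 0.0;                           /* sgn(distance) = sgn(d)  */
}
```

Note that no square root or division is needed, since only the sign of d matters.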

If you are writing a software renderer, see also

http://pages.infinit.net/jstlouis/3dbhole/

IpSeDiXiT
Uh, sorry, I'm still not entirely there yet...

One of the books I've glossed over many times (Advanced Animation and Rendering Techniques, Alan Watt) suggests that we perform back face culling before the projection - this reduces the number of polygons that continue through the pipeline. That makes sense - but if we cull in view space, then after the projection to perspective there will be some polygons that are no longer front facing.

This is my problem - should I do the culling in view space before projection (which requires a dot product), and then after projection do another cull, testing the sign of the Z coordinate?

Argh!
Don't transform your backface normals!!!

Use the inverse matrix transformation on the view.
This way you get the camera position in object space. Now do your back face culling there. No need to do the costly transformation for all your normals!!!
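A minimal sketch of that object-space test, assuming the camera position has already been multiplied by the inverse of the object's matrix (the Vec3 type and names here are made up for illustration):

```c
typedef struct { double x, y, z; } Vec3;

static Vec3 sub(Vec3 a, Vec3 b)  { return (Vec3){a.x-b.x, a.y-b.y, a.z-b.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* Object-space back face test. 'eye' is the camera position after
   applying the inverse of the object's transform; 'normal' is the
   untransformed polygon normal; 'v0' is any vertex of the polygon.
   The polygon faces the camera when the eye lies on the positive
   side of the polygon's plane. */
static int faces_camera(Vec3 eye, Vec3 normal, Vec3 v0) {
    return dot(normal, sub(eye, v0)) > 0.0;
}
```

The appeal of this approach is that the normals stay fixed in object space; only one point (the eye) is transformed per object, instead of one normal per polygon.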

I've got a little tutorial on my homepage, you might want to check it out: Backface Culling

Good luck,
- Bas

Cheers baskuenen. Nice article BTW.

Presumably if I use the inverse matrix transformation for the camera, the same goes for the light sources. Just wondering if this will cause problems with directed light.

I still don't have a workable answer to my original question, though! Please...!

Do you agree that after the perspective projection there can be some polygons that are back facing (ones that were not back facing in view space)? If so, how do you deal with them? If not, what the hell am I doing wrong!


