Here is the approach I am currently using:
Each model has a bounding box that is oriented with the model (an OBB). The OBB is generated when the model is loaded, and its corner points are transformed when an intersect test needs to be performed (with logic to avoid re-transforming it multiple times in a frame, or when the object's position/transformation hasn't changed, etc.).
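To illustrate the caching part, here is a minimal sketch of a lazily transformed OBB; `Vec3`, `Mat4`, and `transformPoint` are placeholders for whatever math types the engine already provides, and the names are purely illustrative:

```cpp
// Minimal sketch: keep the model-space corners from load time and only
// rebuild the world-space corners when the object's transform has changed.
// Vec3, Mat4 and transformPoint stand in for the engine's own math types.
struct OrientedBox {
    Vec3 localCorners[8];   // built once when the model is loaded
    Vec3 worldCorners[8];   // refreshed lazily before an intersect test
    bool dirty = true;      // set whenever the object moves/rotates/scales

    void updateWorldCorners(const Mat4& modelTransform) {
        if (!dirty) return;                 // cached corners are still valid
        for (int i = 0; i < 8; ++i)
            worldCorners[i] = transformPoint(modelTransform, localCorners[i]);
        dirty = false;
    }
};
```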
When it comes time to do the intersect test:
- First, before transforming the OBB, I do a quick check to make sure the ray isn't headed in completely the opposite direction. If it is, exit with no intersection.
- Next the OBB is transformed if necessary.
- An intersect test is performed against the sides of the OBB (see the ray-vs-OBB sketch after this list). If the ray misses the OBB, exit with no intersection.
- If the OBB is intersected, then I loop through each face of the model:
- Calculate a normal for the face.
- If the face points away from the ray, proceed to the next face.
- If not, perform a Möller-Trumbore intersection test on the triangle (see the sketch after this list): return TRUE if an intersection is found, or proceed to the next face if not.
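For the OBB step, one common way to test a ray against the sides of an oriented box is the slab method; the sketch below assumes the box is stored as a center, three orthonormal axes, and half-extents, which may differ from how the OBB is represented in the actual code:

```cpp
// Slab-method ray-vs-OBB test: intersect the ray with the three pairs of
// parallel faces and keep the overlap of the resulting parameter intervals.
#include <algorithm>
#include <cmath>
#include <limits>
#include <utility>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}
static Vec3 sub(const Vec3& a, const Vec3& b) {
    return { a.x - b.x, a.y - b.y, a.z - b.z };
}

struct OBB {
    Vec3 center;
    Vec3 axis[3];      // orthonormal local axes in world space
    float halfSize[3]; // extent along each axis
};

// Returns true and the entry distance tHit if the ray hits the box.
bool rayIntersectsOBB(const Vec3& origin, const Vec3& dir,
                      const OBB& box, float& tHit) {
    float tMin = 0.0f;                              // only hits in front of the origin
    float tMax = std::numeric_limits<float>::max();
    Vec3 toCenter = sub(box.center, origin);

    for (int i = 0; i < 3; ++i) {
        float e = dot(box.axis[i], toCenter);       // box center along this axis
        float f = dot(box.axis[i], dir);            // ray direction along this axis

        if (std::fabs(f) > 1e-6f) {
            float t1 = (e - box.halfSize[i]) / f;   // distances to the two slab planes
            float t2 = (e + box.halfSize[i]) / f;
            if (t1 > t2) std::swap(t1, t2);
            tMin = std::max(tMin, t1);
            tMax = std::min(tMax, t2);
            if (tMin > tMax) return false;          // slab intervals don't overlap: miss
        } else if (-e - box.halfSize[i] > 0.0f || -e + box.halfSize[i] < 0.0f) {
            return false;                           // ray parallel to and outside this slab
        }
    }
    tHit = tMin;
    return true;
}
```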
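And for the per-face step, here is a sketch of Möller-Trumbore with the back-face rejection folded into the sign test on the determinant (so a separate normal check isn't strictly needed). It reuses the `Vec3`, `dot`, and `sub` helpers from the sketch above and assumes counter-clockwise triangle winding:

```cpp
// Möller-Trumbore with back-face culling: a non-positive determinant means
// the triangle faces away from the ray (for CCW winding), so it is skipped.
static Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

bool rayIntersectsTriangle(const Vec3& origin, const Vec3& dir,
                           const Vec3& v0, const Vec3& v1, const Vec3& v2,
                           float& tHit) {
    const float EPS = 1e-6f;
    Vec3 e1 = sub(v1, v0);
    Vec3 e2 = sub(v2, v0);

    Vec3 p = cross(dir, e2);
    float det = dot(e1, p);
    if (det < EPS) return false;            // back-facing or edge-on: skip

    Vec3 t = sub(origin, v0);
    float u = dot(t, p);                    // barycentric u, scaled by det
    if (u < 0.0f || u > det) return false;

    Vec3 q = cross(t, e1);
    float v = dot(dir, q);                  // barycentric v, scaled by det
    if (v < 0.0f || u + v > det) return false;

    tHit = dot(e2, q) / det;                // distance along the ray
    return tHit >= 0.0f;
}
```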
What could I do to optimize this, if anything, and still get similar results? Looping through each face is painful, but my models are fairly low polygon count so it's not that bad. One capability I am adding is the ability to manually specify the OBB and to have multiple OBBs per object so they follow its shape more closely. Another thought is to simply return TRUE when the OBB is intersected and the object is far away, without going through each face for a more precise detection (see the sketch below).
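That last idea might look something like this; `coarseDistance`, `Mesh`, and `rayIntersectsMesh` are hypothetical names standing in for the distance threshold and the per-face loop described above, and `rayIntersectsOBB` comes from the slab sketch:

```cpp
// Sketch of the "coarse test only when far away" idea: if the ray hits the
// OBB and the hit is beyond some distance threshold, accept it without the
// per-face pass. coarseDistance is a tuning value, not from the original code.
bool pickObject(const Vec3& rayOrigin, const Vec3& rayDir,
                const OBB& box, const Mesh& mesh, float coarseDistance) {
    float tBox;
    if (!rayIntersectsOBB(rayOrigin, rayDir, box, tBox))
        return false;                       // missed the box: definitely no hit
    if (tBox > coarseDistance)
        return true;                        // far away: the box hit is good enough
    return rayIntersectsMesh(rayOrigin, rayDir, mesh);  // close: do the precise per-face test
}
```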