rogerdv said:
Nanite should have some sort of cost. Is it some sort of LOD on the fly? If so, some computation power is required to reduce millions of polygons to a manageable amount.
The costs are spent in offline preprocessing.
After that you have a tree, where parent nodes are larger triangle clusters with less detail, and children are smaller clusters with more detail.
For the first frame we render, we traverse the tree once from the root, stopping at the coarsest clusters whose detail matches the screen resolution. This has a cost of O(log N).
For the next frame we can do better: we take the set of clusters selected for the previous frame, and for each cluster we test whether we want to increase or decrease detail. That's a cost of just O(N).
Notice N represents scene complexity, but is itself bounded by the LOD mechanism already, so it can't get too large.
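Sketched in Python, the two selection passes could look like this. All data structures and names here are my own illustration, not actual Nanite/UE5 code:

```python
# Hypothetical sketch of the cluster selection described above.

class Cluster:
    def __init__(self, error, children=None):
        self.error = error              # geometric error of this simplification level
        self.children = children or []  # finer child clusters; empty = leaf cluster
        self.parent = None
        for child in self.children:
            child.parent = self

def select_first_frame(root, budget, projected):
    """Frame 1: traverse the tree top-down once, stopping at the coarsest
    cluster whose projected error fits the screen-space budget."""
    selected, stack = [], [root]
    while stack:
        c = stack.pop()
        if projected(c.error) <= budget or not c.children:
            selected.append(c)          # coarse enough (or a leaf): render it
        else:
            stack.extend(c.children)    # too coarse at this distance: descend
    return selected

def update_selection(selected, budget, projected):
    """Later frames: one linear pass over last frame's selection, moving
    each cluster at most one level up or down the tree."""
    result, merged = [], set()
    for c in selected:
        if projected(c.error) > budget and c.children:
            result.extend(c.children)   # camera got closer: refine
        elif c.parent and projected(c.parent.error) <= budget:
            if id(c.parent) not in merged:
                merged.add(id(c.parent))
                result.append(c.parent) # camera moved away: coarsen
        else:
            result.append(c)            # detail level is still right
    return result
```

One subtlety the sketch glosses over: merging back to a parent has to be coordinated across all of its children, and Nanite's clusters reportedly form groups in a DAG rather than a strict tree, so that detail levels can change without cracks at cluster boundaries.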
At this point the algorithm causes less work than any alternative, including having no LOD at all. So I would say your assumption about high costs is wrong.
That's a very general statement, of course. It depends on whether you use it to achieve very high detail everywhere or as an optimization, and on what you compare your results against.
But the general advantage stands: Nanite is a big step towards a constant rendering cost per pixel, instead of a variable cost depending on scene complexity.
The latter approach was the games industry standard for way too long, and Nanite is something we want to adopt rather than doubt.
Some more specific points:
Nanite also helps with shadow maps, which render faster / at higher precision.
It provides its LOD advantage only per model (or instance). If you have millions of models in your scene, you still need some other LOD / culling approach on top to deal with the large numbers. Nanite can't merge your whole world into a single model to reduce its detail further.
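Such a coarse pass on top could be as simple as distance culling plus spatial bucketing, so that far-away cells could later be swapped for merged proxy meshes. This is just a sketch of the idea under my own assumptions, not anything Nanite provides:

```python
import math

def cull_and_bucket(instances, camera_pos, max_draw_dist, cell_size):
    """Drop instances beyond a draw distance, then bucket the survivors
    into grid cells; distant cells could be replaced by merged proxies,
    which a per-instance system won't do for you."""
    buckets = {}
    for pos, model_id in instances:
        if math.dist(pos, camera_pos) > max_draw_dist:
            continue                    # culled: too far to matter
        cell = tuple(int(c // cell_size) for c in pos)
        buckets.setdefault(cell, []).append(model_id)
    return buckets
```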
Details such as compute rasterization receive a lot of attention, but are not that important with regard to the higher-level performance wins mentioned above. It matters, of course, if you want to implement your own similar system.
It's not clear to me what the model quality is like if we aim for very aggressive reduction. I could not really test this when I tried it. But at least while streaming in, we see that the low-poly versions have a lot of issues, e.g. texture seams.
It's also unclear how the reduction deals with challenging topology, e.g. small holes closing / opening as we move closer or further away. Because Nanite does not generate new textures or UV layouts for simplified models, quality will be limited.
Because of that, Nanite likely aims primarily at high detail, which keeps those issues hidden. If the primary goal was aggressive reduction, e.g. to scale down to low-power HW easily, they might need to work harder on those problems.
Finally, it's also unclear to me if UE5 is generally fast or slow. I see hints in both directions, e.g. Fortnite running at 60fps on Series S, but the announced Silent Hill remake at 30fps on high-end PCs.
It surely depends on what you do. But I'm certain the real costs are caused much more by Lumen than Nanite.