
Costs of Unreal Nanite?

Started by July 17, 2023 06:35 PM
9 comments, last by JoeJ 1 year, 4 months ago

I have been talking with my main artist, who thinks that Unreal 5 and Nanite are the best thing invented since the wheel, something magical that will let him work less. But I think that nothing comes free (actually, in my country nothing comes even cheap) and Nanite should have some sort of cost. Is it some sort of LOD on the fly? If so, some computation power is required to reduce millions of polygons to a manageable amount. He insists that the storage space required to save the models isn't that big either.

What is your opinion about this?

I just do vanilla clustered with vanilla LOD levels, so my take is from skimming around in Nanite while considering adding continuous LOD'ing into my clustering.

Less cost than cons. The only things I'd consider overlooked costs are that cluster LOD selection is super redundant (it has to be, AFAIK, but I don't think it becomes meaningfully large for most people), and that the material classification is an added fullscreen step; you have to accept it will potentially create heavy costs in G-Buffer fill passes if you go wild with materials.

Streaming is just concealed runtime cost: it's not supposed to be seen, but if you've pegged either side it's possibly going to be seen. And the people who will see it are on the hardware bottom end, so if they see it at all, they're going to see it a lot.

Everything else is just the pros/cons of the Visibility Buffer technique and the copy over into the G-Buffer, which in UE's case is probably a win. The costs are probably going to be overtaken by the wins against small triangles and overdraw in the vis-buffer. The geometry pump basically just goes BRRRR.


rogerdv said:
Nanite should have some sort of cost. Is it some sort of LOD on the fly? If so, some computation power is required to reduce millions of polygons to a manageable amount.

The costs are spent in offline preprocessing.

After that you have a tree, where parent nodes are larger triangle clusters with less detail, and children are smaller clusters with more detail.

For the first frame we render, we can traverse the tree once, stopping where an actual cluster fits screen resolution. This has a cost of O(log N) per selected cluster, so roughly O(N log N) in total.

For the next frame we can do better: we take the set of selected clusters from the previous frame, and for each cluster we test whether we want to increase or decrease detail. That's a cost of just O(N).
Notice N represents scene complexity, but it is itself bounded by LOD already, so it can't get too large.
At this point the algorithm costs less work than any alternative, including having no LOD at all. So I would say your assumption about high costs is wrong.
That's a very general statement. Of course it depends on whether you use it to achieve very high detail everywhere or as an optimization, and on what you compare your results to.
But the general advantage is there. Nanite is a big step towards a constant rendering cost per pixel, instead of a variable cost depending on scene complexity.
The latter was the games industry standard for way too long, and Nanite is something we want to adopt rather than doubt.
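The two-phase selection described above (a full traversal on the first frame, then local split/merge updates on the previous frame's cluster set) can be sketched in a few lines. This is a toy illustration only: all names (`Cluster`, `first_frame_cut`, `refine_cut`) are made up, the tree is binary, and real Nanite operates on a cluster DAG with conservative screen-space error bounds.

```python
class Cluster:
    """A node in a toy cluster hierarchy; parents are coarser."""
    def __init__(self, error, parent=None):
        self.error = error        # geometric error of this simplification level
        self.children = []
        self.parent = parent

def build(error, depth):
    """Build a toy binary cluster tree; each level halves the error."""
    node = Cluster(error)
    if depth > 0:
        node.children = [build(error / 2, depth - 1) for _ in range(2)]
        for c in node.children:
            c.parent = node
    return node

def first_frame_cut(root, threshold):
    """Full traversal from the root, stopping where a cluster's
    error falls below the (projected) pixel threshold."""
    cut, stack = [], [root]
    while stack:
        n = stack.pop()
        if n.error <= threshold or not n.children:
            cut.append(n)
        else:
            stack.extend(n.children)
    return cut

def refine_cut(cut, threshold):
    """Next-frame update: walk only the previous cut, splitting or
    merging each cluster locally -- no full tree traversal needed."""
    out = []
    for n in cut:
        if n.error > threshold and n.children:          # too coarse: split
            out.extend(n.children)
        elif n.parent and n.parent.error <= threshold:  # parent suffices: merge
            if n.parent not in out:
                out.append(n.parent)
        else:
            out.append(n)                               # keep as-is
    return out
```

The point of the second function is the one made above: once a cut exists, each frame only touches the clusters already selected, so the per-frame work scales with visible detail, not with source mesh complexity.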

Some more specific points:

Nanite also helps with shadow maps, which render faster / at higher precision.

It addresses the LOD advantage only per model (or instance). If you have millions of models in your scene, you still need some other LOD / culling approach on top to deal with the large numbers. Nanite can't merge your whole world into a single model to reduce its detail further.

Details such as compute rasterization receive a lot of attention, but they are not that important with regard to the higher-level performance wins mentioned above. It matters, of course, if you want to implement your own similar system.

It's not clear to me what the model quality is like if we aim for very aggressive reduction. I could not really test this when I tried it. But at least while streaming in, we see that the low-poly versions have a lot of issues, e.g. texture seams.
It's also unclear how the reduction deals with challenging topology, e.g. small holes closing / opening as we move away or closer. Because Nanite does not generate new textures or UV layouts for simplified models, quality will be limited.
Because of that, Nanite likely aims primarily for high detail, to keep those issues hidden. If the primary goal were aggressive reduction, e.g. to scale down to low-power HW easily, they might need to work harder on those problems.

Finally, it's also unclear to me whether UE5 is generally fast or slow. I see hints in both directions, e.g. Fortnite at 60fps on Series S, but the announced Silent Hill remake at 30fps on high-end PC.
It surely depends on what you do. But I'm certain the real costs come much more from Lumen than Nanite.

As JoeJ says, there is a cost, but it isn't easy to pin down. It's also a cost for features that many players want, and that they've already paid for in terms of sufficient hardware.

It increases complexity at the cost of memory and GPU processing, but chances are good you've got lots of memory available, and you must have a high-end GPU, so it's what the hardware is already doing. Nanite can start with and include more detail, which can take much more space. There are some compute costs and some memory costs at runtime, but alternatives like discrete LOD, CLOD, or view-dependent decimation have costs as well.

It's true that it can keep your GPU busier, but that's also what gamers want if it looks better. People don't buy the latest and greatest video card so that the hardware can sit idle; they want it producing the best possible images. It doesn't matter if it engages their premium water-cooled system if it looks good in the process. If it can bring the vertex count closer to one vertex per pixel AND stay at a solid frame rate, that's great. It can ALSO adaptively drop back to a graphics card from ten years ago and still look good while staying at a solid frame rate. It can look good on any of those graphics cards, and it will use whatever graphics card you've got to the max. That's the superpower: no matter what hardware you've got, it will maximize the pretty pictures.

It increases the size of data, but for decently-sized games in 2023 a few extra gigabytes is barely notable. We're not distributing on floppies, CDs, or DVDs. A game distribution going from 15GB to 18GB isn't a big deal these days, especially if those gigabytes are going to something flashy.

So yes, it has a cost. But they're costs players generally want to pay for features they want to see. If they don't have the hardware for it, Unreal still gives them a good visual experience, at the cost of storing data those lower-end players will never use.

Thanks for your answers! A couple of questions: can Nanite run on low-end hardware? From your posts I assume that it is not aimed at such. Do you think that it fits any kind of game? In our case, we are currently working on a 3D isometric game (yeah, same as Baldur's Gate 3). We don't plan to switch from Godot; it is just a theoretical question.

rogerdv said:
Do you think that it fits any kind of game?

No. If you don't need / can't benefit from continuous LOD, there is no need for Nanite. Its cost would be a waste.

rogerdv said:
In our case, we are currently working on a 3D isometric game (yeah, same as Baldur's Gate 3).

You'd need to tell us more:

BG3, meaning you show zoomed in close ups, e.g. character conversations with detailed facial animation, looking like AAA cutscenes? Or close up combat / other reasons to zoom in and out?

Or an isometric RPG where the zoom level remains close to constant?

I'd guess it's more of the latter, so you don't need LOD?

Also, what work would your artist intend to save?
An artist might argue he can model at high detail and does not need to care about a low-poly game asset.
But if you need only one level of detail, producing it offline requires fewer resources than shipping many levels of detail and paying the runtime cost.


Is Nanite just a system where the triangles are smaller than the pixels?

taby said:

Is Nanite just a system where the triangles are smaller than the pixels?

It's the product name for a view-dependent Continuous Level Of Detail (CLOD) system.

It needs models at a high enough resolution that it makes sense, at a detail level where it matters. The various high-resolution mesh details are streamed in, so they must be on an SSD, and it needs a graphics card with enough memory and processing cycles.

Being “smaller than pixels” depends entirely on your vantage point. If the view is something in the distance it can be sub-pixel, if the view is up close it can be quite big. That's a situation view-dependent CLOD systems try to resolve.

The goal is to use higher-resolution meshes when it makes sense. When you're about to be eaten by the monster, you want high-resolution meshes plus high-resolution textures of the monster's teeth, since that's all you can see, and even then you can still potentially have very large triangles and pixelated texture close-ups. Yet when you're looking at the same monster on a distant hilltop, you only need a low-resolution mesh and a low-detail texture, which still gives sub-pixel detail. Considering your viewpoint, you want enough mesh density that you can see distinct corners and smooth curves, yet not so much detail that you're overdrawing or wasting processing on invisible details.
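The "enough detail for your viewpoint" idea boils down to projecting a triangle edge to screen pixels and refining until it covers roughly one pixel. The formula and function names below are my own sketch of a generic view-dependent metric, not what Nanite actually computes.

```python
import math

def projected_pixels(world_size, distance, fov_y_deg, screen_height):
    """Approximate on-screen height in pixels of a world-space length
    seen at 'distance' through a perspective camera."""
    view_height = 2 * distance * math.tan(math.radians(fov_y_deg) / 2)
    return world_size / view_height * screen_height

def refinement_levels(base_edge, distance, fov_y_deg=90, screen_height=1080):
    """How many times to halve the triangle edge length until an edge
    covers roughly one pixel (0 = base mesh is already fine enough)."""
    px = projected_pixels(base_edge, distance, fov_y_deg, screen_height)
    levels = 0
    while px > 1.0:
        px /= 2
        levels += 1
    return levels
```

With these toy numbers, a 2 m edge viewed from 10 m needs seven halvings to approach one pixel at 1080p, while the same edge at 2 km already projects below a pixel and needs none: exactly the monster-up-close versus monster-on-a-hilltop situation described above.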

The tech itself isn't that new; I worked on similar systems for terrains over 20 years ago. The big difference is that with tremendous GPU power and rapid data streaming, coupled with some fancy data structures, we can keep up on graphics cards without an overwhelming performance cost. 20 years ago it was all a computer could do to keep up with view-dependent terrain meshes in a flight simulator. On PS2 / Xbox era hardware you had 4MB of graphics memory and either 32MB or 64MB of total system RAM. Today we can throw around models that total 100+ megabytes, with multiple 4K textures (a 4K BC3/BC5 texture is 21 megabytes): a 4K texture for color, another for albedo, another for roughness, another for bump, coupled with mesh resolutions that would have been irresponsible to use in games just a decade ago.
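The "21 megabytes" texture figure above checks out: BC3 and BC5 store 16 bytes per 4x4 pixel block, i.e. one byte per pixel, and a full mip chain adds about a third on top of the base level. A quick sketch of the arithmetic (the function name is made up):

```python
def bc3_size_mib(width, height, with_mips=True):
    """Size in MiB of a BC3/BC5 compressed texture.

    BC3/BC5 use 16 bytes per 4x4 block = 1 byte per pixel; a full
    mip chain multiplies the base size by roughly 4/3.
    """
    base = width * height  # 1 byte per pixel
    total = base * 4 / 3 if with_mips else base
    return total / (1024 * 1024)
```

For a 4096x4096 texture this gives 16 MiB for the base level and about 21 MiB with mips, matching the figure quoted above.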

The level of detail system has several layers of fallback involved, so if you're on hardware that can't keep up it will still have the older style of LOD system, it will just leave the higher resolution data on disk unused. You pay a processing cost up front to generate the LOD hierarchy, and there's a small cost as it's used, but it's not so different from other costs we pay all the time in development.

@JoeJ

We have zoom, but never too close to characters:

As we are using Godot, fancy cinematics with close-ups are out of the question. What the artist wants is to save his work time, of course. For our game, we use MakeHuman to create a base model with rig, and then modify it to remove a bunch of polygons (MH generates a 30k-poly mesh, and the low-poly topology is a very low quality 4k-poly mesh). With Nanite, he thinks he can sculpt and throw the result directly into the project, and everybody will have a happy ending.

rogerdv said:
We have zoom, but never too close to characters:

Looks nice. But as expected, imo you don't need LOD for geometry at all. Mip-mapped textures, as usual, already cover your needs.

You also don't need occlusion culling. Rendering the grassy ground below a building will be faster than trying to figure out what's occluded.
Because the Nanite renderer implements occlusion culling, you were right: in your case, Nanite only adds cost for no benefit.

I mean, I guess one can make isometric / top-down games with UE just fine, but there is no point in considering switching engines just for Nanite.

rogerdv said:
With Nanite, he thinks he can sculpt and throw the result directly into the project, and everybody will have a happy ending.

Either your artist missed the fact that Nanite does not support characters, or they lifted this restriction and I've missed the breaking news.

But I guess Nanite is still mostly meant for the static world. E.g. in the Matrix demo, cars are Nanite models, which works because they are rigid. But if a car takes damage from a crash, they replace the Nanite model with a traditional mesh and apply the damage deformation to that.
Add to that the expectation that with Nanite we create more detailed content, and I would conclude that using Nanite does not really save you work.

Also, for characters, even if they worked with Nanite, one might still prefer to do a low-poly retopo manually, simply because automated Nanite reduction might not be of the highest quality we expect for characters.
So your artist has no reason to look enviously at UE, imo.
If he's interested in automated workflows or reduction, there are tools such as Simplygon which can do this too, and even better. But I can't recommend anything specific.

This topic is closed to new replies.
