
Recent progress in LOD generation?

Started by January 27, 2025 04:40 AM
6 comments, last by JoeJ 1 day, 16 hours ago

Any recent progress in low-poly LOD generation?

I know about Simplygon, Unity's thing, UE5's system, quadric mesh reduction as done in Blender, impostor generation, and approximate convex hull decomposition.

The problem is, if you want to hammer distant objects down to, say, 50-100 tris, those algorithms tend to not do well. They're much better for 100,000 tris to 10,000 tris. If you push them hard, they tend to come unglued.

Looking for open source code.

The basic problems are:

  • Quadric mesh simplification tries to minimize the volume difference between the original and reduced models. That's not visually optimal for LODs. For example, cloth with a wrinkle in it won't become flat; the edges will be pulled inward or holes will appear. Unreal Engine at least has silhouette protection, which keeps the edges roughly the same. Most others, including Blender, lack that. Anyone know an open-source quadric mesh simplifier with UE4/5-type silhouette protection? (A minimal sketch of the quadric metric itself follows after this list.)
  • Approximate convex hull decomposition (https://github.com/SarahWeiii/CoACD) can do a nice job on buildings. But the input mesh has to be a watertight manifold or the thing blows up. I tried. This is a common problem with academic code.
  • What you really want is for LOD generation to generate new textures with normals to represent surface detail. Does anything actually do that?
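For reference, a minimal numpy sketch of the standard Garland-Heckbert quadric error metric that most of these tools build on: each vertex accumulates the plane quadrics of its incident triangles, an edge collapse is scored against the summed quadric, and border protection is typically layered on top by locking or heavily penalizing vertices on open-boundary edges. The function names and the midpoint placement are my own simplifications, not any particular library's API.

```python
import numpy as np
from collections import Counter

def plane_quadric(p0, p1, p2):
    # Fundamental quadric K = p p^T for the triangle's plane (n . x + d = 0, |n| = 1).
    n = np.cross(p1 - p0, p2 - p0)
    length = np.linalg.norm(n)
    if length < 1e-12:
        return np.zeros((4, 4))
    n = n / length
    p = np.append(n, -np.dot(n, p0))
    return np.outer(p, p)

def vertex_quadrics(verts, faces):
    # Each vertex accumulates the quadrics of its incident triangles.
    Q = np.zeros((len(verts), 4, 4))
    for f in faces:
        K = plane_quadric(verts[f[0]], verts[f[1]], verts[f[2]])
        for v in f:
            Q[v] += K
    return Q

def collapse_cost(Q, verts, i, j):
    # Cost of collapsing edge (i, j) to its midpoint (a common fallback when the
    # optimal-position solve is skipped): error = v^T (Qi + Qj) v.
    v = np.append((verts[i] + verts[j]) * 0.5, 1.0)
    return float(v @ (Q[i] + Q[j]) @ v)

def open_boundary_edges(faces):
    # Edges referenced by exactly one triangle. A border-preserving simplifier
    # would lock these vertices or add heavily weighted constraint quadrics for them.
    count = Counter(tuple(sorted(e)) for f in faces
                    for e in ((f[0], f[1]), (f[1], f[2]), (f[2], f[0])))
    return [e for e, c in count.items() if c == 1]
```

UE-style silhouette protection presumably works along those lines: boundary and sharp-feature edges get extra constraint planes so collapses can't pull them inward, which is exactly the failure the wrinkled-cloth example shows.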

Obviously all the problems you have mentioned are rooted in using meshes as the representation of geometry.
Using voxels, for example, LOD becomes trivial. It's also much easier with point hierarchies, spherical Gaussians, etc.
I feel we're very close to the point where triangles become a dead end.
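To make "trivial" concrete: with a dense occupancy grid, a coarser LOD is just an 8-to-1 pooling of the finer one, repeated until you have a full mip chain. A tiny numpy sketch, assuming a power-of-two boolean grid:

```python
import numpy as np

def voxel_lod(occupancy):
    # occupancy: bool array of even shape (x, y, z).
    # A parent voxel is occupied if any of its 2x2x2 children are (max-pooling);
    # swap .any() for .mean() to get a filtered density instead of hard occupancy.
    x, y, z = occupancy.shape
    return occupancy.reshape(x // 2, 2, y // 2, 2, z // 2, 2).any(axis=(1, 3, 5))

def voxel_mips(occupancy):
    # Full LOD chain down to one voxel per axis.
    mips = [occupancy]
    while min(mips[-1].shape) > 1:
        mips.append(voxel_lod(mips[-1]))
    return mips
```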


JoeJ said:

Obviously all the problems you have mentioned are rooted in using meshes as the representation of geometry.

Voxels

Then everything looks like Minecraft.

Point hierarchies.

What are those? Something you get by post-processing a point cloud?

Spherical gaussians

Everything looks like a blob.

Yes, there are other representations. Constructive solid geometry, as used by CAD systems (SolidWorks, FreeCAD, Fusion 360, and I think SketchUp), is great for buildings and manufactured objects, but poor for avatars and clothing. Parametric primitives (cubes, spheres, ellipsoids, etc.) can be taken a long way, as Archimatrix does. But those are not as general as meshes. Most artist-generated content is meshes. None of them really help much with this problem.

A good first step is a reliable way of turning surface meshes into valid watertight manifolds that represent volumes. Most of the algorithms for that tend to blow up on, or distort, geometry that would otherwise render just fine.

Nagle said:
What are those? Something you get by post-processing a point cloud?

Yes.

Nagle said:
Then everything looks like Minecraft.

Depends on resolution and filtering. (But I'm not a fan of voxels or other volumetric representations such as SDFs - they take too much memory.)

Nagle said:
Everything looks like a blob.

Looks much more like a real-world 3D photograph to me. But of course it becomes blurry if you zoom in too much. Still, blurry is better than exposing flat triangles and texture seams.

Nagle said:
Yes, there are other representations. Constructive solid geometry, as used by CAD systems (SolidWorks, FreeCAD, Fusion 360, and I think SketchUp), is great for buildings and manufactured objects, but poor for avatars and clothing.

Regarding rendering, the real problem with parametric surfaces is complexity: it is much higher than for triangle meshes, so reducing detail below the base-mesh level becomes far too hard and impractical. I have never seen a related research paper.
So they can help with compression, but general LOD is out of reach.

Nagle said:
A good first step is a reliable way of turning surface meshes into valid watertight manifolds that represent volumes.

I had to solve this problem to generate my GI surfel hierarchy. This is what I do (a rough sketch of the first two steps follows below):
Convert the mesh to volume data using the winding number method, which is robust even if the mesh has holes, missing back sides, etc.
Generate an isosurface from the volume (a kind of marching cubes).
Do quad-dominant remeshing to get a higher-quality result with curvature-aligned edge flow.

It's good for my needs and also generates LODs, and it can handle organic geometry well. But for technical models the result is pretty imperfect.
Just mentioning it to point out that any mesh can be made a watertight manifold, but the results might not be good enough for your needs.
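Not JoeJ's actual code, just a rough sketch of the first two steps under my own assumptions (brute-force generalized winding numbers on a dense grid, scikit-image for the isosurface; the resolution and padding parameters are arbitrary):

```python
import numpy as np
from skimage import measure  # marching cubes

def winding_numbers(points, verts, faces):
    # Generalized winding number per query point: sum of signed solid angles of all
    # triangles (Van Oosterom & Strackee), divided by 4*pi. Points enclosed by the
    # surface end up near 1, outside near 0, even if the mesh has holes.
    w = np.zeros(len(points))
    for f in faces:
        a = verts[f[0]] - points
        b = verts[f[1]] - points
        c = verts[f[2]] - points
        la = np.linalg.norm(a, axis=1)
        lb = np.linalg.norm(b, axis=1)
        lc = np.linalg.norm(c, axis=1)
        det = np.einsum('ij,ij->i', a, np.cross(b, c))
        denom = (la * lb * lc
                 + np.einsum('ij,ij->i', a, b) * lc
                 + np.einsum('ij,ij->i', b, c) * la
                 + np.einsum('ij,ij->i', c, a) * lb)
        w += 2.0 * np.arctan2(det, denom)
    return w / (4.0 * np.pi)

def mesh_to_watertight(verts, faces, res=64, pad=0.05):
    # Step 1: sample the winding-number field on a dense grid around the mesh.
    lo, hi = verts.min(axis=0), verts.max(axis=0)
    lo, hi = lo - pad * (hi - lo), hi + pad * (hi - lo)
    axes = [np.linspace(lo[i], hi[i], res) for i in range(3)]
    grid = np.stack(np.meshgrid(*axes, indexing='ij'), axis=-1).reshape(-1, 3)
    vol = winding_numbers(grid, verts, faces).reshape(res, res, res)
    # Step 2: extract the 0.5 isosurface; the result is a closed surface of the
    # enclosed volume, independent of the input mesh's topology problems.
    spacing = (hi - lo) / (res - 1)
    out_verts, out_faces, _, _ = measure.marching_cubes(vol, level=0.5,
                                                        spacing=tuple(spacing))
    return out_verts + lo, out_faces
```

The quad-dominant remeshing step is a separate tool (Instant Meshes and similar field-aligned remeshers), so it isn't sketched here; this brute-force version is also O(grid cells x triangles), so real implementations evaluate the winding number hierarchically.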

I guess your situation is this: You want an efficient renderer for user-made content meshes, which can be anything and have no LODs. So you want to generate those LODs automatically, and you try to find good objectives for such a process, e.g. preserving silhouettes.
But there is a flaw here: it's the expectation that the automated result still has the same quality as the input mesh, relative to its current level of detail. That is not possible, since your algorithm can't have the same skill as the human artist. Detecting edges etc. is not good enough to match the artist, who uses human perception and intelligence to fit shapes with a low number of triangles.
So your results will always be of lower quality, which has to be accepted. Or you wait for generative AI tools for the task, which I guess won't take too long from now.
Imo, automated mesh reduction algorithms can't do very low-poly models; for now only humans can.

Nagle said:
I know about Simplygon, Unity's thing, UE5's system, quadric mesh reduction as done in Blender, impostor generation, and approximate convex hull decomposition. The problem is, if you want to hammer distant objects down to, say, 50-100 tris, those algorithms tend to not do well.

Why are impostors on that list? Aren't they already below 50-100 tris?

Makes me think about Google Seurat again. Sadly I don't understand well how it works, but it seems to be proof that impostors can work for the general case. Ever tried something like that?

Nagle said:
Looking for open source code.

Consider also looking at research papers, especially if you want “recent progress”.

Hugues Hoppe has been pushing the field for over 30 years and is still getting papers published as co-author and advisor. You can read many of the papers on his website. Just about all research is going to either use his works, cite his works, or both, so his name works as a convenient search term.

Most modern mesh reduction done well requires re-evaluating the model into more complex geometric shapes and parametric surfaces, followed by reconstructing the model into minimalist forms that represent those shapes efficiently. Simple mesh reduction by removing polygons and reducing T-junction issues has been a solved problem since the late 1990s and very early 2000s, when “provably always consistent” and “provably most efficient” forms were being reached. Those methods hit their limits decades ago, which is why newer methods first generate a conceptual infinite-precision parametric version, then identify the key points, then re-apply the 30-year-old numerically proven fundamentals to generate optimal representations.

https://hhoppe.com/proj/pvdrpm/

Is that where Nanite came from?


Nagle said:
Is that where Nanite came from?

No. Hoppe's work mostly operates on individual triangles, either by collapsing edges progressively as shown in the quoted paper, or even in continuous ways where vertices move gradually towards a collapsed state.
Although this had initial applications, e.g. for terrain or even characters (e.g. the Messiah game), it never became a true solution. Almost everybody used discrete LODs instead.
Reasons are:
* Processing individual mesh elements disagrees with how GPUs work, and doing it on the CPU is too costly.
* Collapsing individual surface elements through legal operations cannot reduce the topology itself, so continuous reduction can only go so far.
* Our meshes are not just a connected surface in 3D; they also have disconnected charts of surface in 2D texture space. This complicates the topology, and we hit the limit earlier.

Nanite solves the first and most important point. It works with clusters instead of individual elements, and as opposed to earlier such methods, it also avoids the need for dynamic stitching of cracks on cluster boundaries. (I saw people referring to older papers proposing the method before Nanite.)
The clustering method is independent of the reduction method. Afaik they use quadric error metrics, but they could switch to something else if desired.
Mesh quality is worse than in Hoppe's works. Nanite needs to preserve boundary sections at a higher LOD than desired to avoid the cracks. This can be resolved with the next reduction step up the hierarchy, but we are still forced to mix at least two levels of detail per cluster, and those technical constraints dominate over the objectives of a high-quality reduction.
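A toy illustration of that constraint (not Nanite's code, and the names are mine): given a per-triangle cluster assignment, the vertices that must be kept exactly where they are during a cluster's simplification pass are the ones on edges shared between two different clusters.

```python
from collections import defaultdict

def locked_vertices(faces, cluster_of_face):
    # faces: list of (v0, v1, v2) index triples; cluster_of_face: cluster id per triangle.
    # An edge used by triangles from two different clusters lies on a cluster
    # boundary; its vertices must stay fixed so neighbouring clusters can be
    # simplified independently without opening cracks.
    edge_clusters = defaultdict(set)
    for f, cid in zip(faces, cluster_of_face):
        for e in ((f[0], f[1]), (f[1], f[2]), (f[2], f[0])):
            edge_clusters[tuple(sorted(e))].add(cid)
    locked = set()
    for edge, clusters in edge_clusters.items():
        if len(clusters) > 1:
            locked.update(edge)
    return locked
```

Those locked rings are the boundary sections preserved at a higher LOD than desired; grouping clusters and re-partitioning at the next level up moves the boundaries so the old ones can finally be collapsed.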

So yeah, as you're mainly interested in very low-poly representations, Hoppe's stuff might be good to look up.
As for reduction, quadric error metrics became the most widely used afaict, but his works surely show many alternatives.
But as said, 50-100 triangles is hard for a human artist already.
