Nagle said:
What are those? Something you get by post-processing a point cloud?
Yes.
Nagle said:
Then everything looks like Minecraft.
Depends on resolution and filtering. (But I'm not a fan of voxels or other volumetric representations such as SDFs; they take too much memory.)
Nagle said:
Everything looks like a blob.
It looks much more like a 3D real-world photograph to me. But of course it becomes blurry if you zoom in too much. Still, blurry is better than exposing flat triangles and texture seams.
Nagle said:
Yes, there are other representations. Constructive solid geometry, like CAD systems (SolidWorks, FreeCAD, Fusion 360, and I think SketchUp) are great for buildings and manufactured objects, but poor for avatars and clothing.
Regarding rendering, the real problem with parametric surfaces is complexity: it is much higher than for triangle meshes, so reducing detail below the base mesh level becomes too hard and impractical. I have never seen a research paper on this.
So it can help with compression, but general LOD is out of reach.
Nagle said:
A good first step is a reliable way of turning surface meshes into valid watertight manifolds that represent volumes.
I had to solve this problem to generate my GI surfel hierarchy. This is what I do:
Convert the mesh to volume data using the winding number method, which is robust even if the mesh has holes, missing back sides, etc.
Generate an isosurface from the volume (a kind of marching cubes).
Do quad-dominant remeshing to get a higher-quality result with curvature-aligned edge flow.
It's good for my needs and also generates LODs, and it handles organic geometry well. But for technical models the result is pretty imperfect.
I'm just mentioning it to point out that any mesh can be made a watertight manifold, but the results might not be good enough for your needs.
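If it helps, the winding-number inside/outside test I mentioned can be sketched in a few lines. This is a toy version (not my actual code): it sums the signed solid angle of every triangle as seen from the query point, using the van Oosterom–Strackee formula, and divides by 4π. For an outward-oriented closed surface you get ~1 inside and ~0 outside, and it degrades gracefully to fractional values near holes, which is exactly why it's robust for broken input meshes.

```python
import math

def _sub(u, v):
    return (u[0] - v[0], u[1] - v[1], u[2] - v[2])

def _dot(u, v):
    return u[0] * v[0] + u[1] * v[1] + u[2] * v[2]

def _cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def _norm(u):
    return math.sqrt(_dot(u, u))

def winding_number(point, vertices, triangles):
    """Generalized winding number of a triangle soup at `point`:
    ~1 inside an outward-oriented closed surface, ~0 outside,
    fractional near holes or missing back sides."""
    total = 0.0
    for i, j, k in triangles:
        # Shift the triangle so the query point is at the origin.
        a = _sub(vertices[i], point)
        b = _sub(vertices[j], point)
        c = _sub(vertices[k], point)
        la, lb, lc = _norm(a), _norm(b), _norm(c)
        # Signed solid angle of the triangle (van Oosterom-Strackee).
        det = _dot(a, _cross(b, c))
        denom = (la * lb * lc + _dot(a, b) * lc
                 + _dot(b, c) * la + _dot(c, a) * lb)
        total += 2.0 * math.atan2(det, denom)
    return total / (4.0 * math.pi)

# Unit tetrahedron, triangles wound CCW seen from outside.
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
tris = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]

print(round(winding_number((0.2, 0.2, 0.2), verts, tris)))  # inside  -> 1
print(round(winding_number((2.0, 2.0, 2.0), verts, tris)))  # outside -> 0
```

In practice you would evaluate this (accelerated with a BVH or the fast hierarchical approximation) at every cell of your volume grid, then run the isosurface extraction over the result.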
I guess your presumed situation is: you want an efficient renderer for user-made content meshes, which can be anything and have no LODs. So you want to generate those LODs automatically, and you try to find good objectives for that process, e.g. preserving silhouettes.
But there is a flaw here: the expectation that the automated result still has the same quality as the input mesh, relative to its current level of detail. This is not possible, since your algorithm can't have the same skill as a human artist. Detecting edges etc. is not good enough to match the artist, who uses human perception and intelligence to fit shapes with a low number of triangles.
So your results will always be of lower quality, which has to be accepted. Or you wait for generative AI tools for the task, which I guess won't take too long from now.
Imo, automated mesh reduction algorithms can't do very low-poly models; for now only humans can.
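To illustrate what those reduction algorithms actually optimize: quadric error metrics (Garland–Heckbert, the basis of Blender's collapse decimation) score a vertex position by its summed squared distance to the planes of the original faces. A toy sketch of just that error term (my own simplified naming, no edge-collapse loop):

```python
import math

def plane_quadric(p0, p1, p2):
    """Fundamental error quadric K = n n^T of a triangle's supporting
    plane a*x + b*y + c*z + d = 0, with n = (a, b, c, d)."""
    ux, uy, uz = (p1[0] - p0[0], p1[1] - p0[1], p1[2] - p0[2])
    vx, vy, vz = (p2[0] - p0[0], p2[1] - p0[1], p2[2] - p0[2])
    nx, ny, nz = (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)
    l = math.sqrt(nx * nx + ny * ny + nz * nz)  # assumes non-degenerate tri
    a, b, c = nx / l, ny / l, nz / l
    d = -(a * p0[0] + b * p0[1] + c * p0[2])
    n = (a, b, c, d)
    return [[n[i] * n[j] for j in range(4)] for i in range(4)]

def add_q(q1, q2):
    """Quadrics are additive: summing them accumulates plane constraints."""
    return [[q1[i][j] + q2[i][j] for j in range(4)] for i in range(4)]

def vertex_error(q, p):
    """v^T Q v: summed squared distance from p to all planes in q."""
    v = (p[0], p[1], p[2], 1.0)
    return sum(v[i] * q[i][j] * v[j] for i in range(4) for j in range(4))

# Two coplanar triangles in the z = 0 plane sharing a vertex:
q = add_q(plane_quadric((0, 0, 0), (1, 0, 0), (0, 1, 0)),
          plane_quadric((0, 0, 0), (0, 1, 0), (-1, 0, 0)))
print(vertex_error(q, (0.3, -0.2, 0.0)))  # staying on the plane costs ~0
print(vertex_error(q, (0.0, 0.0, 1.0)))   # moving off it costs ~2 (1 per plane)
```

The point of the sketch: the metric only knows squared plane distances. It has no notion of "this bump is the character's nose and must survive," which is why hammering down to 50-100 tris with it loses exactly the features a human would keep.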
Nagle said:
I know about Simplygon, Unity's thing, UE5's system, quadric mesh reduction as done in Blender, impostor generation, and approximate convex hull decomposition. The problem is, if you want to hammer distant objects down to, say, 50-100 tris, those algorithms tend to not do well.
Why are impostors on that list? Because they come in below 50-100 tris?
Makes me think about Google Seurat again. Sadly I don't fully understand how it works, but it seems to be proof that impostors can work for the general case. Ever tried something like that?