jon1 said:
In the metaverse, though, there aren't just 30 different vehicle meshes to choose from; everyone wants to make and sell their own mesh, and on a typical road, almost every single mesh will be custom.
Exactly. Which is what that video above is about. That's displaying content that is unoptimized, was created independently by several hundred different people, and has almost no instancing. Frame rate is about 50-60 FPS.
Rendering that kind of content hasn't been addressed much. Game development tries to avoid it, optimizing as much as possible during the level-building process. In a big virtual world with user-created content, we don't have that luxury.
If you don't solve this problem, your world has to be dumbed down to cartoon level, like Facebook Horizon, or whatever they're calling it this week. Or limited to small disconnected rooms or areas.
What I'm doing is roughly this:
I'm using Rust→Rend3→WGPU→Vulkan. This stack lets you change the render state in the GPU from multiple threads while rendering. So one thread is just endlessly redisplaying the scene, maintaining a relatively constant frame rate, while other threads asynchronously make changes.
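Structurally it looks something like the sketch below: a render thread redraws on a fixed frame budget while worker threads mutate shared scene state behind it. This is plain std-library Rust with a placeholder Scene type standing in for the Rend3 renderer, so it's an illustration of the threading shape, not my actual code.

```rust
use std::collections::VecDeque;
use std::sync::{Arc, Mutex};
use std::thread;
use std::time::Duration;

// Placeholder for GPU-side scene state. In the real stack this is the
// renderer object, which can be updated from any thread while drawing.
struct Scene {
    loaded_textures: Vec<String>,
}

fn main() {
    let scene = Arc::new(Mutex::new(Scene { loaded_textures: Vec::new() }));
    let pending: Arc<Mutex<VecDeque<String>>> = Arc::new(Mutex::new(
        (0..8).map(|i| format!("texture_{i}.jpg")).collect(),
    ));

    // Worker threads: fetch and decompress assets, then apply them to the
    // scene asynchronously, never blocking the render thread for long.
    let mut workers = Vec::new();
    for _ in 0..2 {
        let scene = Arc::clone(&scene);
        let pending = Arc::clone(&pending);
        workers.push(thread::spawn(move || loop {
            let job = pending.lock().unwrap().pop_front();
            match job {
                Some(name) => {
                    thread::sleep(Duration::from_millis(30)); // stand-in for fetch + decode
                    scene.lock().unwrap().loaded_textures.push(name);
                }
                None => break,
            }
        }));
    }

    // Render thread: redisplay whatever is currently resident at a steady
    // cadence, regardless of what the workers are doing.
    for frame in 0..10 {
        let count = scene.lock().unwrap().loaded_textures.len();
        println!("frame {frame}: {count} textures resident");
        thread::sleep(Duration::from_millis(16)); // ~60 FPS budget
    }

    for w in workers {
        w.join().unwrap();
    }
}
```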
As the camera viewpoint moves, once a second, the desired resolution and loading priority of each textured face is recomputed. The goal is one texel per screen pixel. If there's a resolution or priority change, a request for the new resolution of the texture goes on a to-do queue at the computed priority. There are multiple threads taking requests off the queue in priority order, fetching textures from the cache or network, making resolution changes, and loading them into the GPU. Doing this keeps several CPUs busy, mostly decompressing images coming in from the remote asset servers.
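The "one texel per screen pixel" target boils down to something like this. The function name, and the idea of feeding it an estimated on-screen size from the projection, are my framing of what the paragraph describes, not the real code:

```rust
/// Pick a target texture resolution aiming for roughly one texel per screen
/// pixel. `screen_pixels` is the face's approximate on-screen edge length in
/// pixels; `full_res` is the texture's full resolution (assumed square,
/// power of two). The floor of 64 is an assumed minimum, not a real constant.
fn desired_resolution(screen_pixels: f32, full_res: u32) -> u32 {
    // Round up to the next power of two so we never undershoot badly,
    // then clamp to the range the asset server actually offers.
    let target = screen_pixels.max(1.0).ceil() as u32;
    target.next_power_of_two().clamp(64, full_res)
}

fn main() {
    // A face covering ~300 pixels on screen, backed by a 2048x2048 source
    // texture, only needs the 512x512 version.
    assert_eq!(desired_resolution(300.0, 2048), 512);
    // A distant face covering ~10 pixels gets the floor resolution.
    assert_eq!(desired_resolution(10.0, 2048), 64);
    println!("ok");
}
```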
The priority queue has to support dynamically changing the priorities of already-queued requests. Otherwise, when the viewpoint moves, you're stuck working through a queue of stuff that's now less important.
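One common way to get re-prioritization is lazy deletion: changing a request's priority just pushes a fresh heap entry, and stale entries are skipped at pop time. Here's a minimal sketch of that idea in Rust; the type and its methods are illustrative, not necessarily how my queue works:

```rust
use std::cmp::Reverse;
use std::collections::{BinaryHeap, HashMap};

/// Priority queue whose entries can be re-prioritized after queuing.
/// Lower priority value = more urgent.
struct ReprioritizableQueue {
    heap: BinaryHeap<Reverse<(u32, u64)>>, // (priority, request id)
    current: HashMap<u64, u32>,            // latest priority for each live request
}

impl ReprioritizableQueue {
    fn new() -> Self {
        Self { heap: BinaryHeap::new(), current: HashMap::new() }
    }

    fn push(&mut self, id: u64, priority: u32) {
        self.current.insert(id, priority);
        self.heap.push(Reverse((priority, id)));
    }

    /// Change the priority of an already-queued request.
    fn reprioritize(&mut self, id: u64, priority: u32) {
        if self.current.contains_key(&id) {
            self.push(id, priority);
        }
    }

    /// Pop the most urgent request, skipping stale entries.
    fn pop(&mut self) -> Option<u64> {
        while let Some(Reverse((priority, id))) = self.heap.pop() {
            if self.current.get(&id) == Some(&priority) {
                self.current.remove(&id);
                return Some(id);
            }
            // Stale entry left over from an earlier priority; ignore it.
        }
        None
    }
}

fn main() {
    let mut q = ReprioritizableQueue::new();
    q.push(1, 50); // texture request 1, medium priority
    q.push(2, 90); // texture request 2, low priority
    // Viewpoint moved: request 2 is now urgent.
    q.reprioritize(2, 10);
    assert_eq!(q.pop(), Some(2));
    assert_eq!(q.pop(), Some(1));
    assert_eq!(q.pop(), None);
    println!("ok");
}
```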
What you see in that video above looks like you're viewing static preloaded content. In fact, the system is working frantically behind the scenes trying to get the right stuff in VRAM before the viewpoint gets close enough to blow the illusion. We must run very hard to seem to stay in the same place.
This is to mip-mapping as paging to disk is to having more memory. If we had 30-40GB of VRAM, instead of 4 to 6, we'd just dump in all the mip-mapped textures.
So, this is a way to work with metaverse-type content at scale, with the detail and frame rate expected of games today. People have been saying that's impossible. It's not.