
Unreal Engine 5 Demo - Rendering Claims

Started May 13, 2020 04:22 PM
32 comments, last by JoeJ 4 years, 8 months ago

hmmm… just culling / merging the triangles that are smaller than a pixel and done?

I suppose you are right. If you aren't using any normal maps, just vertex normals, then you'd want verts to always render less than a couple of pixels apart so that the interpolation isn't 1995-style per-vertex lighting across lots of pixels.
However, deciding what needs to be processed in a given frame could be complicated. For the merging, a compute shader seems reasonable. You would need to know edge info and other things. I'm curious to see how a really long rock asset works. It looks like it would keep the same density across it in screen space, so updating that mesh would take some time, unless they process the mesh in chunks no bigger than 2x2x2 or so. If a long asset had to be processed every frame, that would be pretty interesting.

However, I'm still confused how 33 million verts for one model (x, y, z, Nx, Ny, Nz) comes to 792 MB of data.
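Back-of-the-envelope check (assuming plain uncompressed 32-bit floats, which is just my assumption):

```cpp
#include <cstdio>

int main()
{
    // Assumption: raw, uncompressed vertex data, six 32-bit floats per vertex.
    const double vertexCount  = 33e6;                 // 33 million verts
    const double bytesPerVert = 6 * sizeof(float);    // x, y, z, Nx, Ny, Nz = 24 bytes
    std::printf("%.0f MB\n", vertexCount * bytesPerVert / 1e6);   // prints "792 MB"
}
```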

NBA2K, Madden, Maneater, Killing Floor, Sims

hmmm… just culling / merging the triangles that are smaller than a pixel and done?

But also, it's not just about culling triangles smaller than a pixel; your mesh can't have big triangles either. So you need very subdivided meshes so that the triangles will be very small. That's the key.

NBA2K, Madden, Maneater, Killing Floor, Sims


Green_Baron said:
I doubt the lighting calculations are done completely on the geometry end of the pipeline

Agree, assuming their hierarchical data structure contains simplified representations.

dpadam450 said:
You would need to know edge info and other things.

If merged triangles are sub-pixel, there's no need for connectivity on edges.
Who knows… maybe they just splat a single pixel like in Dreams : )

But reducing a 30M-poly model on the fly when it's so distant it covers only 8x8 pixels? No. They must store LODs. And generating those is also a lot faster if connectivity is not required.
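Just to illustrate what I mean by picking a stored LOD from projected size, a completely made-up sketch (hypothetical names, assuming each stored LOD carries an average triangle edge length):

```cpp
#include <cmath>

// Hypothetical sketch: choose the coarsest precomputed LOD whose triangles
// still project to roughly a pixel or less. lodTriangleEdge[i] is the average
// triangle edge length (world units) of LOD i, with LOD 0 = finest.
int PickLod(float distance, float fovY, int screenHeight,
            const float* lodTriangleEdge, int lodCount)
{
    // World-space size covered by one pixel at this distance.
    const float worldPerPixel = 2.0f * distance * std::tan(fovY * 0.5f) / screenHeight;

    // Walk from coarsest to finest, take the first LOD that is still sub-pixel.
    for (int i = lodCount - 1; i >= 0; --i)
        if (lodTriangleEdge[i] <= worldPerPixel)
            return i;
    return 0;   // even the finest LOD has triangles bigger than a pixel
}
```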

dpadam450 said:
However, I'm still confused how 33 million verts for one model (x, y, z, Nx, Ny, Nz) comes to 792 MB of data.

I expected this: UE would use heavy instancing, showing the same Quixel rock thousands of times. And exactly that happened : )

Storage is the real limitation, not streaming speed. It means many games will not reach this level of detail if their scenes do not work well with instancing. For nature it seems acceptable, but city scenes and interiors? Quixel won't help there.

dpadam450 said:
But also, it's not just about culling triangles smaller than a pixel; your mesh can't have big triangles either. So you need very subdivided meshes so that the triangles will be very small. That's the key.

Not sure. Maybe GI requires ‘small’ triangles because it treats them as spheres or surfels, but for rendering large ones should be fine? (‘Small’ for GI could mean 0.1 meters, which was my current-gen target for surfel GI.)

Large triangles would help with storage at least.

It all sounds and looks great, but I'm not 100% convinced on the workflow changes.

In terms of animation and such, you're still going to have to do some manual work to get the topology correct. It isn't as simple as loading a decimated or ZRemesher'd version of the high poly to rig and you're good to go. Nobody is loading 30-million-poly meshes for rigging either.

As normal maps won't be “required”, what about texturing for the other maps? I couldn't imagine UV unwrapping a high-poly mesh. Unless everyone is using seamless textures now and just applies them to meshes, or paints directly in the engine with vertex masking?

Depending on how this all develops, I might dive back into Unreal regardless of my distaste for Blueprints and a few other things. I wonder how Unity intends to respond. Even Unreal offering no royalties until your first $1 million is a big game changer.

Interesting stuff for sure!

Programmer and 3D Artist

Eh, the lighting is, what? Hard shadows, plus an iterative-bounce local probe with some spherical basis (Gaussian, ambient dice, whatever) that you derive specular from, with light injected from a reflective shadow map and a distant skybox? Something very close to that: you can see the “lag” from the light moving when they moved the sun around, even the caverns are too blue for much distance to be taken into account in the local bounce lighting, it's not high-res, and the water is screen-space traced. For details, Epic's screen-space traced GI already looks surprisingly stable and could cover smaller details handily.

The geometry is very cool. That level of detail is achievable with LEADR mapping, which was demoed in real time seven years ago, let alone today. But the lack of LODs etc. sounds great. It does make me wonder about indirect tracing costs though: the single light source wasn't really an “area light”, and as I said, the water reflection is screen-space traced. How utterly difficult and costly might that pipeline be to ray trace? Which makes me wonder how, if at all, highly reflective surfaces could be handled with this; the demo was very, very, very diffuse in general.

That being said, it is good to see the geometry pipeline being rethought, and splatting might be going on somehow. Random users can create some absolutely insane detail in Dreams, well beyond triple-A titles, just using its point-splatting stuff; it's a bit crazy to be honest.


I was fooled a lot by the details. The GI is lower resolution than I thought. Small-scale details don't interact anywhere, but I'm not sure if they still use SSAO.

Could be anything. When I personally experimented with voxel GI, I used SH to represent voxel occlusion and material. This gave nice prefiltered results, and no quantization was visible.
So I would not rule out a VXGI variant either.
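Roughly the kind of thing I mean, a minimal sketch of projecting a per-voxel directional value into 2-band SH (simplified, made-up names):

```cpp
// Minimal sketch: project a directional function (e.g. per-voxel occlusion)
// into 2-band (L0 + L1) spherical harmonics via Monte Carlo sampling.
struct SH4 { float c[4] = {}; };

static void EvalSH4(const float d[3], float out[4])
{
    out[0] = 0.282095f;            // Y_0,0
    out[1] = 0.488603f * d[1];     // Y_1,-1
    out[2] = 0.488603f * d[2];     // Y_1,0
    out[3] = 0.488603f * d[0];     // Y_1,1
}

template <class Fn>
SH4 ProjectToSH4(Fn f, const float (*dirs)[3], int numDirs)
{
    SH4 sh;
    for (int i = 0; i < numDirs; ++i)
    {
        float basis[4];
        EvalSH4(dirs[i], basis);
        const float v = f(dirs[i]);                 // e.g. 0 = occluded, 1 = open
        for (int k = 0; k < 4; ++k) sh.c[k] += v * basis[k];
    }
    const float weight = 4.0f * 3.14159265f / numDirs;   // Monte Carlo estimator weight
    for (int k = 0; k < 4; ++k) sh.c[k] *= weight;
    return sh;
}
```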

Frantic PonE said:
you derive specular from with light injected from a reflective shadowmap and distant skybox?

Doubt it. A reflective SM would lack too much information. I guess they trace the whole environment per probe and store enough angular information to support diffuse, normal mapping, and some specular in one go.
The main reason for the lag should be stochastic updating of the probes, hidden by an exponential moving average.
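Something along these lines per probe, if I had to guess (all names hypothetical):

```cpp
// Hypothetical per-probe update: re-trace only a small random subset of probes
// each frame and hide the resulting lag with an exponential moving average.
struct Probe { float radiance[9 * 3]; };   // e.g. 3-band SH, RGB

void UpdateProbe(Probe& probe, const float* freshlyTraced, float blend /* ~0.05f */)
{
    // blend ~ 0.05 means roughly the last ~20 updates get averaged together,
    // which smooths noise and flicker but causes visible lag when lighting changes.
    for (int i = 0; i < 9 * 3; ++i)
        probe.radiance[i] = probe.radiance[i] * (1.0f - blend) + freshlyTraced[i] * blend;
}
```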

JoeJ said:
Doubt it. A reflective SM would lack too much information. I guess they trace the whole environment per probe and store enough angular information to support diffuse, normal mapping, and some specular in one go. The main reason for the lag should be stochastic updating of the probes, hidden by an exponential moving average.

Huh, so just Nvidia's RTX GI with the probes encoded in some spherical basis? That would make sense. One supposes they could make use of bent-normal signed distance field occlusion for smaller details like the character's indirect shadow and such.

They talk about “streaming” the geometry. From where? Hard drives don't have the transfer rate.

They have 8 CPUs at 3.2GHz each, and 16 teraflops in the GPU of the PS5, along with 24GB of memory. That's a lot of crunch power.

Frantic PonE said:
Huh, so just Nvidia's RTX GI with the probes encoded in some spherical basis? That would make sense. One supposes they could make use of bent-normal signed distance field occlusion for smaller details like the character's indirect shadow and such.

Yeah. Radiance caching in probes is the simplest way to get infinite bounces; doing it on some form of geometry is much more complicated.
Now the question: how do they represent geometry occlusion for compute tracing? If they were using the visible meshes, why not use HW RT, since the PS5 already has it?

SDFs for everything? Like they already had for shadows? Quite likely. The opening ceiling is made of rigid objects, so they could just transform their SDF volumes along with them.
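For example, something like this, just a sketch with made-up types; the SDF never needs rebuilding, only the query point is transformed:

```cpp
struct Vec3 { float x, y, z; };

// Hypothetical: each rigid object keeps its SDF in local space. To trace against
// a moving object we transform the query point into object space instead of
// ever regenerating the distance field.
float SampleObjectSDF(const Vec3& worldPos,
                      const Vec3& objTranslation, const float objRotation[3][3],
                      float (*sampleLocalSDF)(const Vec3&))
{
    // Inverse of a rigid transform: subtract translation, apply transposed rotation.
    const Vec3 p = { worldPos.x - objTranslation.x,
                     worldPos.y - objTranslation.y,
                     worldPos.z - objTranslation.z };
    const Vec3 local = {
        objRotation[0][0] * p.x + objRotation[1][0] * p.y + objRotation[2][0] * p.z,
        objRotation[0][1] * p.x + objRotation[1][1] * p.y + objRotation[2][1] * p.z,
        objRotation[0][2] * p.x + objRotation[1][2] * p.y + objRotation[2][2] * p.z };
    return sampleLocalSDF(local);   // distances are preserved under rigid motion
}
```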

The final question, then, is how they represent geometry material. Maybe voxel volumes.

And they could stream all this data, so there's no need to constantly voxelize and generate SDFs.

Nagle said:
They talk about “streaming” the geometry. From where? Hard drives don't have the transfer rate. They have 8 CPUs at 3.2GHz each, and 16 teraflops in the GPU of the PS5, along with 24GB of memory. That's a lot of crunch power.

PS5 is 10 TF and 16 GB.

And while its SSD is very fast, the quick streaming at the end is another indication that they must use precalculated LODs.

This topic is closed to new replies.
