
Unreal Engine 5 Demo - Rendering Claims

Started by May 13, 2020 04:22 PM
32 comments, last by JoeJ 4 years, 8 months ago

The final result does look to me like 33 million triangles. It shows 33 million triangles; how they got there is uncertain.

When I was starting with 3D, it was cheaper for geometry with many small details to use per-triangle colors or interpolated vertex colors instead of a texture. For many small details it looks the same as using a texture, and at least on the CPU it was considerably faster.
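To make the idea concrete, here is a minimal sketch (my own illustration, nothing from the demo): with flat vertex colors, the color anywhere on a triangle is just a barycentric blend of its three vertex colors - no texture fetch at all.

```python
def vertex_color_at(colors, bary):
    """Barycentric blend of three per-vertex RGB colors - no texture fetch needed.
    colors: three (r, g, b) tuples; bary: barycentric weights summing to 1."""
    assert abs(sum(bary) - 1.0) < 1e-6
    return tuple(sum(c[i] * w for c, w in zip(colors, bary)) for i in range(3))

# At a vertex the blend returns that vertex's color exactly;
# at the centroid it averages all three.
red, green, blue = (1, 0, 0), (0, 1, 0), (0, 0, 1)
corner = vertex_color_at((red, green, blue), (1.0, 0.0, 0.0))   # -> red
center = vertex_color_at((red, green, blue), (1/3, 1/3, 1/3))   # -> gray average
```

For dense meshes where every triangle is tiny on screen, this interpolation is effectively free per pixel, which is why it competed well with texturing back then.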

Once the geometry data is loaded close to the GPU, it is not such a big deal. My cheap five-year-old laptop renders a million triangles on its integrated Intel GPU inside the browser.

And the GI is pretty coarse, I think.

Here the elbow occludes nothing: the same amount of light reaches the wall regardless of the elbow's position. This could be due to lag or coarse detail, but it is not realistic.
 


Something like that. As a visual effect, not as a technique, AO is part of GI. The elbow should prevent many samples from reaching the wall.

I know many people do not understand me when I talk about AO as a visual effect rather than a technique, so here I explain how occlusion happens in GI →
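To make that concrete, here is a tiny Monte Carlo sketch (my own toy code, not anything from the demo): a sphere stands in for the elbow near a wall point. There is no separate AO pass - any blocker simply stops some hemisphere samples from reaching the sky, and the darkening we call AO falls out of the visibility test automatically.

```python
import math, random

def blocked(origin, d, center, radius):
    # Does the ray origin + t*d (t >= 0) pass through the sphere?
    oc = [center[i] - origin[i] for i in range(3)]
    t = sum(oc[i] * d[i] for i in range(3))      # closest approach along the ray
    if t < 0.0:
        return False
    miss_sq = sum(c * c for c in oc) - t * t     # squared miss distance
    return miss_sq < radius * radius

def sky_irradiance(point, occluder=None, samples=4096, seed=1):
    """Monte Carlo estimate of irradiance from a uniform white sky (radiance 1)."""
    rng = random.Random(seed)
    visible = 0
    for _ in range(samples):
        # Cosine-weighted direction on the hemisphere around the +Z normal.
        r1, r2 = rng.random(), rng.random()
        phi, sin_t = 2.0 * math.pi * r1, math.sqrt(r2)
        d = (math.cos(phi) * sin_t, math.sin(phi) * sin_t, math.sqrt(1.0 - r2))
        if occluder is None or not blocked(point, d, *occluder):
            visible += 1                         # sample reaches the sky
    return visible / samples

open_sky = sky_irradiance((0, 0, 0))                                 # no blocker
with_elbow = sky_irradiance((0, 0, 0), occluder=((0, 0, 1.0), 0.5))  # darker
```

The sphere subtends a 30° cap around the normal, so roughly a quarter of the cosine-weighted energy is lost; the wall point gets visibly darker purely through GI visibility.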
 

Frantic PonE said:

Does make me wonder about indirect tracing costs though, the single lightsource wasn't really “area light” and as I said, the water reflection is screenspace traced. How utterly difficult and costly might that pipeline be to raytrace? Which makes me wonder how, if at all, highly reflective surfaces could be handled with this, the demo was very, very, very diffuse in general.

I thought the same thing. Don't get me wrong, I was amazed by the demo, but the lack of RT left me quite a bit disappointed; I was really hoping they would show some crazy RT demo. That scene, as amazing as it is, was surely chosen on purpose to highlight geometric detail and GI, and to kind of “hide” potential weaknesses of their renderer (those easily solvable with a path tracer). In fact, that water looks like it came from a game made 10 years ago…

Still impressive, but I was hoping for something… “different” (cough, RT, cough).


There is this speculation or rumor that they use an SVO for geometry, e.g.:

Don't know where it comes from; I read it on the Beyond3D forum.

If true, it would make sense to use it for GI as well. They could store approximate material in the node bounds. Tracing it would be quite different from SDF / voxel ideas, but DDGI-like probes are still an option.

Or they also use the SVO nodes to cache irradiance and sample from that, which brings me back to the speculations in my first post. Their algorithm would then be very similar to my own work.

If they use an SVO, I assume they support multiple trees so they can be transformed. Rebuilding it for the opening ceiling does not sound practical.

I noticed some screen-space shadows in the video, but no indication of SSAO. This puzzled me in Hellblade II already.

Demystified a lot:

So it's VXGI with SDF objects and screen space to increase accuracy.

I don't know much about micro-polygon rendering, but maybe it is not necessary to merge multiple triangles. Instead, keep just the single triangle that intersects the center of the current pixel; TAA would filter out the noise.

Single-pixel triangles make no sense at all. I'm pretty sure they use pixel splats like in Dreams, and that's also where those geometry-SVO rumors come from. The Many LoDs paper already beat the rasterizer with this technique for high-poly models.

Wahoo, nailed ⅔ of the GI! It's inherently not a great technique unless you control the scene. The light just isn't going to bounce around enough unless you have regular openings to distant skylights and a strong injected light source pouring in. You can see how quickly it rushes to absolute black in those open caves as well as in the temple. One also wonders how the trace distance, or rather the teapot-in-a-stadium problem, is handled.

Still, for the goal of being as real-time as possible, tradeoffs need to be made, making it usable overall in the right sorts of game scenarios. One also wonders what the light-injection costs are going to be, and thus just how usable it actually is.


JoeJ said:
So it's VXGI with SDF objects and screen space to increase accuracy.

I'm no longer sure about this.

I'm convinced they use a hierarchy (e.g. an SVO) to store geometry: leaf nodes store triangles, internal nodes store points for splatting.
Streaming then only requires loading the top levels for distant objects. It all makes sense.
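A toy sketch of how such a cut through the hierarchy could work (this is my reading of the speculation, not Epic's actual scheme): descend until a node's projected size drops to about a pixel, draw internal nodes as splats, and rasterize the triangles of any leaves you reach.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Node:
    size: float                        # world-space bounding-box edge length
    triangles: Optional[list] = None   # leaf payload, None for internal nodes
    children: list = field(default_factory=list)

def projected_size(node, distance, fov_scale=1000.0):
    # Crude pinhole projection: pixels covered ~ size / distance * scale.
    return node.size / max(distance, 1e-6) * fov_scale

def collect_cut(node, distance, pixel_threshold=1.0):
    """Return (splat_nodes, triangle_leaves) for one view distance."""
    if node.triangles is not None:            # leaf reached: full detail
        return [], [node]
    if projected_size(node, distance) <= pixel_threshold or not node.children:
        return [node], []                     # subpixel node: draw one splat
    splats, leaves = [], []
    for c in node.children:
        s, l = collect_cut(c, distance, pixel_threshold)
        splats += s; leaves += l
    return splats, leaves

leaf = Node(size=1.0, triangles=["tri0"])
root = Node(size=8.0, children=[Node(size=4.0, children=[leaf])])
near = collect_cut(root, distance=10.0)    # close: reach the triangle leaf
far = collect_cut(root, distance=1e6)      # distant: one splat for the root
```

Streaming then falls out naturally: for distant objects only the top levels of the tree ever need to be resident.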

But if they have this, then why use crappy VXGI with its poor voxelization and tracing performance? Why not something like Many LoDs or imperfect shadow maps?

Maybe they do. And they store probes in a regular grid (so the DF video is still correct) because they lack a surface parametrization, and with no support for dynamic stuff like characters they need volume lighting anyway.

On the other hand, can an SVO help with faster voxelization? Probably yes, but the GI geometry would remain quantized, which I did not notice happening in the video. Though the opening ceiling seems to be the only really dynamic object at all, and a temporal filter could hide its blocky movement.

For anyone interested in the multi-bounce part of GI, I'd recommend this one: https://enlisted.net/en/news/show/25-gdc-talk-scalable-real-time-ray-traced-global-illumination-for-large-scenes-en/#!/

I'm on it: no additional runtime voxelization (except the first frame in a new scene), no pre-baked surfels or irradiance volumes, no complex, beginner-unfriendly mathematical representations involved, and it's hardware-RT ready.

JoeJ said:

JoeJ said:
So it's VXGI with SDF objects and screen space to increase accuracy.

I'm no longer sure about this.

I'm convinced they use a hierarchy (e.g. an SVO) to store geometry: leaf nodes store triangles, internal nodes store points for splatting.
Streaming then only requires loading the top levels for distant objects. It all makes sense.

But if they have this, then why use crappy VXGI with its poor voxelization and tracing performance? Why not something like Many LoDs or imperfect shadow maps?

Maybe they do. And they store probes in a regular grid (so the DF video is still correct) because they lack a surface parametrization, and with no support for dynamic stuff like characters they need volume lighting anyway.

On the other hand, can an SVO help with faster voxelization? Probably yes, but the GI geometry would remain quantized, which I did not notice happening in the video. Though the opening ceiling seems to be the only really dynamic object at all, and a temporal filter could hide its blocky movement.

In all my tests, SVO performance is inferior to directly using a 3D texture or cascaded VXGI; the quality gain isn't worth it unless your scene is designed heavily around it. Cone tracing an SVO and building the SVO are just way more expensive compared to a simple, stupid 3D texture.

Now from here:

https://www.eurogamer.net/articles/digitalfoundry-2020-unreal-engine-5-playstation-5-tech-demo-analysis

"Lumen uses ray tracing to solve indirect lighting, but not triangle ray tracing," explains Daniel Wright, technical director of graphics at Epic. "Lumen traces rays against a scene representation consisting of signed distance fields, voxels and height fields. As a result, it requires no special ray tracing hardware."

To achieve fully dynamic real-time GI, Lumen has a specific hierarchy. "Lumen uses a combination of different techniques to efficiently trace rays," continues Wright. "Screen-space traces handle tiny details, mesh signed distance field traces handle medium-scale light transfer and voxel traces handle large scale light transfer."

Lumen uses a combination of techniques then: to cover bounce lighting from larger objects and surfaces, it does not trace triangles, but uses voxels instead, which are boxy representations of the scene's geometry. For medium-sized objects Lumen then traces against signed distance fields which are best described as another slightly simplified version of the scene geometry. And finally, the smallest details in the scene are traced in screen-space, much like the screen-space global illumination we saw demoed in Gears of War 5 on Xbox Series X. By utilising varying levels of detail for object size and utilising screen-space information for the most complex smaller detail, Lumen saves on GPU time when compared to hardware triangle ray tracing.
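The quoted description can be sketched as a simple dispatcher (the tier names come from the article; the distance cutoffs and tracer stubs here are made-up placeholders, since Epic has not published the real ones): try the most precise, cheapest tracer first and fall back to coarser scene representations with range.

```python
# Hypothetical tier ranges in meters - placeholders, not Epic's actual cutoffs.
SCREEN_MAX = 2.0     # screen-space traces: tiny details
SDF_MAX = 20.0       # mesh SDF traces: medium-scale transfer
                     # beyond that: voxel traces, large-scale transfer

def hybrid_trace(ray, screen_trace, sdf_trace, voxel_trace):
    """Dispatch one GI ray through the three tiers, nearest and most exact first."""
    hit = screen_trace(ray, max_dist=SCREEN_MAX)
    if hit is not None:
        return ("screen", hit)
    hit = sdf_trace(ray, max_dist=SDF_MAX)
    if hit is not None:
        return ("sdf", hit)
    return ("voxel", voxel_trace(ray))

# Stub tracers: each "hits" only if the surface lies inside its range.
def make_tracer(surface_dist):
    def trace(ray, max_dist=float("inf")):
        return surface_dist if surface_dist <= max_dist else None
    return trace

def trace_surface_at(dist):
    t = make_tracer(dist)
    return hybrid_trace("ray", t, t, lambda ray: dist)

near = trace_surface_at(1.0)    # handled in screen space
mid = trace_surface_at(10.0)    # handled by the mesh SDF tier
far = trace_surface_at(100.0)   # handled by the voxel tier
```

The point of the layering is that the expensive, precise representations are only consulted for short rays, which is where their precision actually shows.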

Thinking about it - it might be sort of similar to cone tracing. If you remember what cone tracing looks like (a quick naive example from one of my HLSL files - https://pastebin.com/k1kaGa0n ), they might be doing exactly this, but for the first few steps (before the cone radius gets big enough to contain a voxel) they use SSGI information and the SDF hierarchy. This might give them higher precision in small details.
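For readers who don't want to open the pastebin, here is the gist of cone tracing in a deliberately simplified 1D form (my own sketch, not the linked file): march along the cone, pick the mip level whose cell size matches the current cone radius, and accumulate radiance and occlusion front-to-back.

```python
import math

def cone_trace(density_mips, cone_angle, max_dist, base_voxel=1.0):
    """Front-to-back accumulation along a 1D cone through a density mip chain.
    density_mips[l][i] is occupancy at mip l; cell size is base_voxel * 2**l.
    For simplicity, density doubles as emitted radiance here."""
    radiance, occlusion, t = 0.0, 0.0, base_voxel
    while t < max_dist and occlusion < 0.99:
        radius = t * math.tan(cone_angle)            # cone widens with distance
        level = min(len(density_mips) - 1,
                    max(0, int(math.log2(max(radius / base_voxel, 1.0)))))
        cell = base_voxel * 2 ** level
        idx = min(int(t / cell), len(density_mips[level]) - 1)
        a = density_mips[level][idx]
        radiance += (1.0 - occlusion) * a            # light attenuated by what's in front
        occlusion += (1.0 - occlusion) * a
        t += cell                                    # step grows with the sampled level
    return radiance, occlusion

# A solid cell at t = 3 fully occludes a narrow cone; an empty chain passes.
hit = cone_trace([[0, 0, 0, 1, 0, 0, 0, 0]], cone_angle=0.05, max_dist=8.0)
miss = cone_trace([[0] * 8], cone_angle=0.05, max_dist=8.0)
```

The hybrid idea above would replace those first fine-grained steps, where the cone is still thinner than a voxel, with screen-space and SDF lookups instead of voxel samples.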

I'll need to see the demo to play with it. But the GI solution just doesn't look that good or robust to me (I might be wrong, although… you can't beat unbiased path tracing, no matter how hard you try).

The whole video running at 30 fps doesn't convince me either (60 fps is a minimum these days, especially with VR on the market, where you need even more).

Don't get me wrong, the demo is nice - but it doesn't seem to be some major breakthrough. Also, my 50 cents: an engine is mainly about tools, and they haven't shown them yet.

My current blog on programming, linux and stuff - http://gameprogrammerdiary.blogspot.com

Vilem Otte said:
In all my tests, SVO performance is inferior

What I mean is they could use their SVO (or whatever) to increase voxelization performance. E.g., a workgroup per block of the 3D volume traverses the SVOs and voxelizes them, instead of rasterizing into a 3D texture.
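A 1D toy of what I mean by that (my guess at the approach, nothing confirmed): instead of rasterizing triangles into the volume, descend only the occupied nodes of the tree and stamp their footprints into the grid; empty space is skipped for free because it simply has no nodes.

```python
def voxelize_svo(node, grid, voxel_size):
    """Mark grid cells covered by an occupancy tree (1D for clarity).
    node = (min_corner, size, children); children is None for an occupied leaf."""
    lo, size, children = node
    if children is None or size <= voxel_size:
        # Node is at or below voxel resolution: stamp its footprint.
        x0 = int(lo // voxel_size)
        x1 = int((lo + size - 1e-9) // voxel_size)
        for x in range(max(x0, 0), min(x1 + 1, len(grid))):
            grid[x] = 1
        return
    for child in children:                    # descend occupied children only
        voxelize_svo(child, grid, voxel_size)

grid = [0] * 8
root = (0.0, 8.0, [(0.0, 4.0, None)])         # only the left half holds geometry
voxelize_svo(root, grid, voxel_size=1.0)
```

On a GPU each subtree (or block of the volume) could go to its own workgroup, so the traversal parallelizes the same way a per-block rasterization pass would.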

Vilem Otte said:
Thinking about it - it might be sort of similar to cone tracing. If you remember what cone tracing looks like (a quick naive example from one of my HLSL files - https://pastebin.com/k1kaGa0n ), they might be doing exactly this, but for the first few steps (before the cone radius gets big enough to contain a voxel) they use SSGI information and the SDF hierarchy. This might give them higher precision in small details.

Yeah, most probably.

What confuses me a bit is the low angular resolution of the specular. Using traditional VXGI you get much more detail, and it's still fast.
What they have looks very similar to what I get from my 4x4 envmaps per surfel. So I assume they do not trace per pixel, or only trace a short distance to increase detail.
Probably they store some basis per voxel, similar to DDGI. But updating a volume of such probes sounds pretty expensive unless a diffusion / propagation approach is used.

What do you think? Would it make sense to you to restrict your voxel specular to be blurry for any reason?
It's probably quite different from traditional VXGI, I guess.

Vilem Otte said:
Don't get me wrong, the demo is nice - but it doesn't seem to be some major breakthrough. Also, my 50 cents: an engine is mainly about tools, and they haven't shown them yet.

Well, they achieve a lot of things I have worked on for years, and I think their tools are much faster. I have heavy preprocessing costs and could not claim to import a high-poly model and display it immediately.
I may be on the wrong track with remeshing, intersection removal, seamless UVs with support for displacement, optional texture-space shading, etc. I still work on this - it's hard.
What they have seems much simpler, and it works. Though I see disadvantages:

- Such high detail is a problem: an inconsistent amount of detail is to be expected. Can they have this on everything? The character is certainly not detailed enough for the environment. And what if no Quixel model is available for your game?
- Extreme repetition is to be expected, this time for geometry as well, not only textures. Fast streaming does not help with storage limits.
- If it's 30 fps, it is damn expensive.
- How about RT? To support it, they need triangles from lower LODs; point-splat data would not do. (RT made me give up on splatting ideas for the most part.)
- Static limitations are still to be expected, e.g. from using an SDF per model for lighting.
- Finally, hiding LOD switches behind screen resolution is still total brute force. (But damn - it works!)

But we can always come up with some doubts. They achieved a big leap, and it really shakes things up.

This topic is closed to new replies.
