
Unreal Engine 5 Demo - Rendering Claims

Started by May 13, 2020 04:22 PM
32 comments, last by JoeJ 4 years, 8 months ago

Watch the video. Listen to the claims.

The demo is awesome. We know real-world scans are the future, and tessellation was a big deal. His claim is that in the demo you see a 33 million triangle mesh (396 MB of position data) that was imported directly. I can't see how that makes any sense just as a memory footprint. He says there is no LOD system and no normal map baking, so I don't know how you could LOD. You obviously aren't issuing a draw for 33 million triangles when the thing is 50 feet away.

You can't really unwrap a 33 million triangle mesh well. Many islands would be present, and you need padding between islands for mip-mapping purposes. So there has to be a smaller model to work with that has far fewer vertices; otherwise you have 33 million x 2 floats of UV data as well. So there are two claims I find hard to believe. If he says it's just massive amounts of tessellation, then we are on the same page.
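For what it's worth, the quoted numbers are at least internally consistent. A quick back-of-the-envelope check, assuming 32-bit floats and roughly one stored position per triangle (my reading of the 396 MB figure, not anything stated in the demo):

```cpp
#include <cstdio>

int main() {
    // Hypothetical raw layout: ~33 million positions and UVs as 32-bit floats.
    const double count    = 33e6;
    const double posBytes = count * 3 * 4; // x, y, z at 4 bytes each
    const double uvBytes  = count * 2 * 4; // u, v at 4 bytes each
    std::printf("positions: %.0f MB\n", posBytes / 1e6); // ~396 MB, matching the claim
    std::printf("uvs:       %.0f MB\n", uvBytes / 1e6);  // another ~264 MB if unwrapped
    return 0;
}
```

So a full UV set would add roughly two thirds again on top of the positions, which is why the "no smaller model" claim is hard to swallow.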

We know that triangles smaller than 2x2 pixels cause problems, since GPUs shade in 2x2 pixel quads and sub-quad triangles pay for pixels they don't cover. So if this isn't some version of tessellation, then hmm. I didn't watch closely for LOD popping, so tessellation would be my guess for the culprit, unless some new technology concepts are being used here.

Thoughts?

NBA2K, Madden, Maneater, Killing Floor, Sims

This caught my eye as well in the presentation. My current working theory is that they do simple object-level culling, identify which meshes they need for the next X frames, load those into GPU memory, run an optimizer on them in a compute shader (something similar to Graham Wihlidal's work in Frostbite), and then just render with these optimized buffers in some way. They could rerun the optimization when needed, but this would still use stupid amounts of bandwidth, loading up the source mesh data every time a reoptimization is needed. The other thing they mentioned is that there are no draw call limits, which for me implies some kind of indirect draw system (maybe something similar to this?). Hopefully they do some kind of proper technical presentation on the whole system somewhere, because it looks really interesting.
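For context, "no draw call limits" is exactly what GPU-driven indirect drawing buys you: a culling pass writes a buffer of per-mesh draw arguments, and everything is submitted at once. A minimal CPU-side sketch of the idea (the command struct is OpenGL's actual indirect layout; MeshRecord and buildCommands are made up for illustration, not anything Epic has confirmed):

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Matches OpenGL's DrawElementsIndirectCommand layout: a buffer of these plus
// one glMultiDrawElementsIndirect call can stand in for thousands of draws.
struct DrawElementsIndirectCommand {
    uint32_t count;         // index count for this mesh
    uint32_t instanceCount; // 0 = culled, 1 = visible
    uint32_t firstIndex;    // offset into the shared index buffer
    uint32_t baseVertex;    // offset into the shared vertex buffer
    uint32_t baseInstance;  // lets the shader look up per-draw data
};

// Hypothetical per-mesh metadata the culling pass would read.
struct MeshRecord {
    uint32_t indexCount = 0, firstIndex = 0, baseVertex = 0;
};

// CPU stand-in for what would be a culling compute shader writing the
// command buffer directly on the GPU, with no CPU round trip.
std::vector<DrawElementsIndirectCommand>
buildCommands(const std::vector<MeshRecord>& meshes,
              bool (*visible)(const MeshRecord&)) {
    std::vector<DrawElementsIndirectCommand> cmds;
    cmds.reserve(meshes.size());
    for (size_t i = 0; i < meshes.size(); ++i) {
        const MeshRecord& m = meshes[i];
        if (visible(m)) // a GPU version would write instanceCount = 0 instead
            cmds.push_back({m.indexCount, 1u, m.firstIndex, m.baseVertex,
                            static_cast<uint32_t>(i)});
    }
    return cmds;
}

int main() {
    std::vector<MeshRecord> scene(10000); // placeholder scene
    auto cmds = buildCommands(scene, [](const MeshRecord&) { return true; });
    std::printf("%zu draws in a single indirect submission\n", cmds.size());
    // The real submission would be a single call, e.g.:
    //   glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT,
    //                               nullptr, (GLsizei)cmds.size(), 0);
}
```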

Edit: I just noticed that Graham currently works at Epic. Now I'm tempted to ask him on Twitter whether this stuff is related to his previous work, but I hate bothering people with questions like this.


Extremely impressive! It may be time to leave the tedium of developing one's own framework behind …

A few days ago I downloaded Unreal Engine 4. It uses 40 GiB; that's more than my OS!

I remember brushing over those links long ago. The odd thing is: if you have, say, a statue that you want to render, you apparently have no LODs for it (or so they claim). Then you have to process all those millions of triangles down to something more manageable, even if the result can be reused for the next 100 frames or so. And if you are going to reduce that down to some lower-resolution set, you have to compute some kind of new interpolated normal for your output mesh vertices.
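Recomputing smooth normals for the reduced mesh is cheap, at least; the standard area-weighted accumulation looks something like this (a generic sketch, not whatever Epic actually does):

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y,
            a.z * b.x - a.x * b.z,
            a.x * b.y - a.y * b.x};
}

// Recompute smooth vertex normals for the reduced mesh. The un-normalized
// face cross product is proportional to triangle area, so just accumulating
// it gives area-weighted vertex normals for free.
std::vector<Vec3> recomputeNormals(const std::vector<Vec3>& pos,
                                   const std::vector<unsigned>& indices) {
    std::vector<Vec3> n(pos.size(), Vec3{0, 0, 0});
    for (size_t i = 0; i + 2 < indices.size(); i += 3) {
        unsigned a = indices[i], b = indices[i + 1], c = indices[i + 2];
        Vec3 fn = cross(sub(pos[b], pos[a]), sub(pos[c], pos[a]));
        for (unsigned v : {a, b, c}) {
            n[v].x += fn.x; n[v].y += fn.y; n[v].z += fn.z;
        }
    }
    for (Vec3& v : n) {
        float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
        if (len > 0) { v.x /= len; v.y /= len; v.z /= len; }
    }
    return n;
}
```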

I suppose if you did some magic and just baked stuff in world space as it comes into frame, you could get rid of the need for any tangent-space baking; therefore any LODs computed wouldn't need a new normal map bake due to the underlying LOD changing the tangent-space vectors. This would allow any auto-LODs to sample diffuse/normal/etc. with the same normal mip chain, at least.

NBA2K, Madden, Maneater, Killing Floor, Sims

Extremely impressive! It may be time to leave the tedium of developing one's own framework behind …

Right, I don't even want to dev any more lol. In fairness though, GPUs are faster, pushing more triangles is not super hard, and with scanned data and more memory, making things look nice is not as hard either. This may be a little bit ahead, but at the end of the day, I can write a naive renderer and feed it big textures and big meshes, and if we set aside lighting, we could get somewhat similar results in geometry.

I have a Quixel license and I'm going to snag the assets they just released for this demo. At least I can use those in my game, but it will be fun to tinker a little bit when I get more time to just play with rendering.

NBA2K, Madden, Maneater, Killing Floor, Sims

Well, my good old GTX 970 clone renders 4 million triangles from a LOD'd terrain at reasonable speed with very basic lighting. But look at the lighting in the demo! I haven't watched the whole hour, but it seems like they're doing at least partial ray tracing/marching. Maybe accelerated by sampling a few hundred or thousand points of the image and interpolating between them. Speculating, idk …


dpadam450 said:
I suppose if you did some magic and just baked stuff in world space as it comes into frame, you could get rid of the need for any tangent-space baking; therefore any LODs computed wouldn't need a new normal map bake due to the underlying LOD changing the tangent-space vectors. This would allow any auto-LODs to sample diffuse/normal/etc. with the same normal mip chain, at least.

I wouldn't be surprised if baked normal maps just weren't supported. I mean, if you can cram enough triangles through the GPU, technically they aren't needed. In the demo they even mention at the statue part that there are no baked normal maps in use, which is kinda logical: if you can collapse down the geometry at runtime, you can “bake” the vertex normals that were culled away into a buffer and use that during lighting. Basically you get triangles big enough that the GPU doesn't scream bloody murder, and you preserve some of the higher-frequency detail too.
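A naive sketch of what "baking the culled vertex normals into a buffer" could mean during a collapse pass (entirely my guess at the idea, not Epic's method):

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 normalize(Vec3 v) {
    float l = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return l > 0 ? Vec3{v.x / l, v.y / l, v.z / l} : v;
}

// When 'removed' is collapsed into 'kept', fold its normal into the survivor
// so the coarse mesh still points, on average, the way the fine geometry
// did. 'weight' might be the surface area the survivor absorbs.
void foldNormal(std::vector<Vec3>& normals,
                unsigned kept, unsigned removed, float weight) {
    Vec3& k = normals[kept];
    const Vec3& r = normals[removed];
    k = normalize({k.x + weight * r.x,
                   k.y + weight * r.y,
                   k.z + weight * r.z});
}
```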

After compiling and starting, which all in all took several hours, the engine has swollen to 76 GiB of bloatware. That's more than double the size of the whole OS, including all sorts of software and dev libraries. The editor takes more than a minute to start, it constantly polls the network, and so far the online help has not a word on Linux post-installation. Creating a new blank project takes minutes (more than one).

Can anyone confirm these observations, or have I got a Monday build?

dpadam450 said:
So I don't know how you could LOD.

Hmmm… just culling/merging the triangles that are smaller than a pixel, and done?

I must be a fool to have worked on LOD for so long, then.
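Joking aside, picking detail by projected size needs an actual error metric. A minimal sketch of the naive version, assuming each LOD level halves the triangle count so edge lengths grow by about sqrt(2) per level (the function and its parameters are illustrative, not UE5's actual heuristic):

```cpp
#include <cmath>

// Pick the LOD whose edges project to roughly one pixel. Assumes each LOD
// level halves the triangle count, so edge lengths grow by ~sqrt(2) per
// level. Illustrative only; this is not UE5's actual metric.
int selectLod(float edgeLenLod0,    // average edge length of LOD 0, world units
              float distance,       // distance from the camera
              float screenHeightPx, // viewport height in pixels
              float fovY,           // vertical field of view, radians
              int   maxLod) {
    // World-space size that covers exactly one pixel at this distance.
    float onePixel = 2.0f * distance * std::tan(fovY * 0.5f) / screenHeightPx;
    int lod = 0;
    float edge = edgeLenLod0;
    while (lod < maxLod && edge < onePixel) {
        edge *= 1.41421356f; // next level's edges are ~sqrt(2) longer
        ++lod;
    }
    return lod;
}
```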

No idea about GI either. They have infinite bounces so they must do some caching.

I would assume they use a common acceleration structure for both geometry and lighting: for culling, for precomputed merging (so LOD), and for tracing GI in compute. (According to Digital Foundry, no RT hardware is in use.)
They could use this acceleration structure to cache radiance in world space. This would violate the claim 'GI from millimeters to kilometers', but maybe they mean either small scale at fine resolution or huge scale at coarse resolution. The demo obviously uses coarse resolution: smaller details do not interact, visible at the end on that shrine thing, which has some handles that do not cast the soft shadows they should.

Also, their lag is huge, indicating they use RT and can't update all GI per frame.

I'm unsure if the character contributes to GI, or if it is only an occluder, or just a receiver with capsule shadows. But I assume the lag is too big to support characters.

Maybe they also use a coarse set of probes, but much finer than any current approach in this direction. They could cache in those probes, so storing radiance in the acceleration structure would not be necessary, and the millimeter claim would hold, although it's sampling coarse information then. Software DDGI, basically.
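For reference, the lookup half of a DDGI-style probe system is basically a trilinear fetch from a probe grid. A stripped-down sketch (the ProbeGrid type is mine; real DDGI also weights each probe by a traced visibility/depth term and by the shading normal, both omitted here):

```cpp
#include <algorithm>
#include <vector>

struct Vec3 { float x = 0, y = 0, z = 0; };

// A uniform grid of irradiance probes; the lookup is a plain trilinear blend
// of the 8 surrounding probes.
struct ProbeGrid {
    int nx = 0, ny = 0, nz = 0;
    float cellSize = 1.0f;        // world-space spacing between probes
    Vec3 origin;
    std::vector<Vec3> irradiance; // one RGB value per probe (simplified)

    Vec3 sample(Vec3 p) const {
        float fx = (p.x - origin.x) / cellSize;
        float fy = (p.y - origin.y) / cellSize;
        float fz = (p.z - origin.z) / cellSize;
        int x = std::clamp(int(fx), 0, nx - 2);
        int y = std::clamp(int(fy), 0, ny - 2);
        int z = std::clamp(int(fz), 0, nz - 2);
        float tx = std::clamp(fx - x, 0.0f, 1.0f);
        float ty = std::clamp(fy - y, 0.0f, 1.0f);
        float tz = std::clamp(fz - z, 0.0f, 1.0f);
        Vec3 r;
        for (int dz = 0; dz <= 1; ++dz)
            for (int dy = 0; dy <= 1; ++dy)
                for (int dx = 0; dx <= 1; ++dx) {
                    float w = (dx ? tx : 1 - tx) * (dy ? ty : 1 - ty)
                            * (dz ? tz : 1 - tz);
                    const Vec3& c =
                        irradiance[((z + dz) * ny + (y + dy)) * nx + (x + dx)];
                    r.x += w * c.x; r.y += w * c.y; r.z += w * c.z;
                }
        return r;
    }
};
```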

But they need something to get reasonably fast tracing in any case. Voxels? BVH? IDK.

However - impressive, and finally truly next gen.

I doubt the lighting calculations are done completely on the geometry end of the pipeline, and even without a spatial data structure the number of rays can be drastically reduced by sampling and interpolating. I am sure one day we'll know :-)
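A common shape for that "sample and interpolate" idea is tracing at reduced resolution and doing a depth-aware upsample. A rough single-channel sketch (my illustration of the general technique, not what the demo does):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Trace the expensive GI rays at half resolution, then upsample with a
// depth-aware filter so lighting doesn't bleed across silhouettes.
struct Image {
    int w, h;
    std::vector<float> v; // single channel for brevity
    float at(int x, int y) const { return v[y * w + x]; }
};

float upsampleGi(const Image& lowGi, const Image& lowDepth,
                 float fullResDepth, float fx, float fy) {
    // fx, fy: the full-res pixel's position mapped into low-res coordinates.
    int x = std::clamp(int(fx), 0, lowGi.w - 2);
    int y = std::clamp(int(fy), 0, lowGi.h - 2);
    float tx = std::clamp(fx - x, 0.0f, 1.0f);
    float ty = std::clamp(fy - y, 0.0f, 1.0f);
    float sum = 0.0f, wsum = 0.0f;
    for (int dy = 0; dy <= 1; ++dy)
        for (int dx = 0; dx <= 1; ++dx) {
            float bilin = (dx ? tx : 1.0f - tx) * (dy ? ty : 1.0f - ty);
            // Down-weight samples whose depth disagrees with this pixel,
            // so foreground GI doesn't smear onto the background.
            float depthGap =
                std::fabs(lowDepth.at(x + dx, y + dy) - fullResDepth);
            float w = bilin / (1e-4f + depthGap);
            sum  += w * lowGi.at(x + dx, y + dy);
            wsum += w;
        }
    return sum / wsum;
}
```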

This topic is closed to new replies.
