
Relationship between resolution and polygon count

Started by September 20, 2024 08:27 PM
26 comments, last by JoeJ 3 months, 1 week ago

MagnusWootton said:
These days it is 1 triangle a pixel. Output from OpenSCAD (it's a procedural script CAD tool) puts out 1 triangle per pixel. Video cards can handle it these days.

They can perhaps handle it well enough for some applications, but it is inefficient. If you render one triangle per pixel, it's faster to use a software rasterizer in compute than to rely on (current) hardware acceleration.

So there is a pending decision to make: Either GPU vendors implement efficient micro triangle rasterization in hardware, or they deprecate rasterization altogether and remove those ROPs from future GPUs. I guess we'll see the former.

It's also pretty clear that one triangle per pixel does not really make sense anyway, since if we want that, it's much faster to render single points instead of complex triangles.

MagnusWootton said:
But there's an issue with floating point: when the triangles are that close to each other, they tend to output lines instead of triangles in an STL file, and the normals actually become non-computable. So there's issues with it, that's for sure.

Precision limitations of some file format are irrelevant. That's probably caused by too low a digit-precision setting for text based data, but meshes are usually stored as binary data.
The binary data is usually also quantized for compression, and this can cause close vertices to merge, so triangle areas and normals become zero. But again, that's a matter of choosing the right settings.

Precision limitations do not prevent us from achieving crazy levels of detail. What hinders us is: 1. storage costs, 2. complex software, 3. HW performance.

MagnusWootton said:
If u have it 1 triangle a pixel then u can do amazing self shadowing; it works well on parallax occlusion mapping too. Then if u multisample on top of that (if u make it say 100 triangles per pixel) it actually blends all the microshadows onto single pixels and u end up with something similar to a BRDF result with microfacets.

Yes, but that's a good example of ‘multisampling is too slow, and we should prefilter instead.’
The work on physically based shading did just that: it handles microfacets with a single roughness value, and fits an analytic function to measured real world results to approximate things like self shadowing of microfacets.
Knowing those simple functions, and having a texture with a roughness channel, we can model this effect very well with just one sample.
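To illustrate the idea, here is a minimal sketch of the GGX normal distribution function, one standard analytic microfacet term used in physically based shading. This is just an illustration of 'one roughness value instead of many samples', not the specific fitted function discussed above; the Disney alpha = roughness² remapping is assumed.

```python
import math

def ggx_ndf(n_dot_h: float, roughness: float) -> float:
    """GGX/Trowbridge-Reitz normal distribution function.

    n_dot_h:   cosine between surface normal and half vector
    roughness: perceptual roughness in [0, 1] (alpha = roughness^2)
    """
    a = roughness * roughness
    a2 = a * a
    denom = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
    return a2 / (math.pi * denom * denom)

# A rough surface spreads the highlight, a smooth one concentrates it:
print(ggx_ndf(1.0, 1.0))   # fully rough: 1/pi
print(ggx_ndf(1.0, 0.1))   # smooth: a large, tight peak
```

One scalar roughness plus this closed-form function stands in for averaging hundreds of explicit micro triangles per pixel, which is exactly the prefiltering trade described above.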

Oh i thought i'd need something like 64k resolution (4K with enough anti aliasing to have 2 bil pixels) haha

So you mean i don't need an extreme resolution to fix AA?

Yeah my polygon count is for everything on the “map”

Thanks Magnus! :D


@JoeJ Hi Joe, you delivered legit new info! Where can i read about all this stuff?

Newgamemodder said:

Oh i thought i'd need something like 64k resolution (4K with enough anti aliasing to have 2 bil pixels) haha

So you mean i don't need an extreme resolution to fix AA?

Yeah my polygon count is for everything on the “map”

Thanks Magnus! :D

I wouldn't call rendering at 64k resolution mandatory, but it could be a game feature if u wanted it to be.

Do u mean anti-aliasing? But there's something else u can do: mega resolution ambient occlusion! (That's what I was thinking of.)

Mega resolution ambient occlusion would work well; it being a screen-space compute pass means u can actually do huge maps with it in real time if u've got a nice new video card, but it is going to cost u a lot of the performance budget. Would look really good on grass.

Anti-aliasing and multi-sampling do give the game a more realistic look: you get rid of all the hard boundaries, shadows come out a lot nicer, and the edges of the shafts of light look a lot better, but it costs u a fair bit to do it.

Going to all the trouble of converting the high poly models to low poly with normal maps would be faster than rendering the raw high poly models, but u don't actually have to; u can render the high poly models these days, u just get a lot fewer of them. If you want a horde of high-poly-looking monsters it's still best to do the low poly conversion if u can be bothered.

Newgamemodder said:
Oh i thought i'd need something like 64k resolution (4K with enough anti aliasing to have 2 bil pixels) haha

My impression is that you do the counting and math wrong.
You want high triangle numbers for high detail. Then you calculate how many triangles this will be, and then you conclude:
‘I have X triangles, and i want full detail, so i want only one triangle per pixel, and thus i ideally have a display resolution of X pixels.’
That conclusion is wrong for any 3D game with perspective projection, because the size of triangles relates to distance. Distant stuff becomes smaller, and this breaks your assumption.

The correct conclusion is:
‘I have X triangles, and i want full detail. At the point where the triangles become very small and cover only a few pixels on screen, i want to switch my model to a lower level of detail, so my triangles stay larger than Y pixels.’

Y is usually recommended to be about 10 pixels due to the GPU's 2x2 pixel quad shading (4 threads per quad), but with something like Nanite it can indeed become as low as 1 pixel.
But more important: Screen resolution does not really matter much regarding the detail of content. It matters much more how close the player can come to the geometry. If they move close enough, they can see individual triangles and texels even at a 320 x 200 resolution.
And this, and only this, is the real reason why we want high resolution content. It's about distance first; screen resolution is secondary.
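The 'switch LOD when triangles would shrink below Y pixels' rule can be sketched as follows. This is a toy example under my own assumptions (each coarser LOD roughly doubles triangle edge length; the function names are made up, not from any engine):

```python
import math

def select_lod(tri_edge_world: float, distance: float,
               fov_y: float, screen_height_px: int,
               min_tri_px: float = 10.0) -> int:
    """Pick a LOD level so projected triangle edges stay >= min_tri_px pixels.

    tri_edge_world: typical triangle edge length of LOD 0, in world units
    distance:       distance from camera to the object
    fov_y:          vertical field of view in radians
    """
    # screen pixels per world unit at this distance (vertical axis)
    px_per_unit = screen_height_px / (2.0 * distance * math.tan(fov_y / 2.0))
    projected_px = tri_edge_world * px_per_unit
    if projected_px >= min_tri_px:
        return 0  # full detail
    # each coarser LOD doubles the projected edge length
    return math.ceil(math.log2(min_tri_px / projected_px))

print(select_lod(0.1, 1.0, math.radians(60), 1080))    # close: LOD 0
print(select_lod(0.1, 100.0, math.radians(60), 1080))  # far: coarser LOD
```

Note how the chosen LOD depends only on distance and field of view for a given screen height, which is the point made above: detail is governed by how close the player can get, not primarily by the display resolution.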

As a content creator you can almost ignore things like screen resolution and choice of AA method, since that's something the end users decide. They want a 4K screen or not, they tweak their DLSS settings as they see fit, or they may prefer MSAA. And ideally they can pick what they want.

We became almost independent from screen resolution already back when we switched from 2D games to 3D games. It matters for HUD elements, but not so much for the 3D scenery.

shumtaka said:

@JoeJ Hi Joe, you delivered legit new info! Where can i read about all this stuff?

Well, it's all generic graphics programmer topics.
With the rise of game engines this is no longer as much discussed on forums like this as it was before, but many gfx programmers still do blog posts to share experience.
This is a good source linking to recent posts, papers, videos etc.: https://www.jendrikillner.com/post/graphics-programming-weekly-issue-358/

Newgamemodder said:
Was thinking High poly LOD0 and LOD1 but low poly shadow mesh and collision

I forgot to comment on this one.
For physics collision we need a low poly representation, yes. (I've learned the hard way: Really very low!)

But for shadows we want the same geometry that we use to render the image. If we use a lower LOD, shadows become inaccurate, and they fail to capture fine details, up to the point where those details become unrecognizable and just a waste.

Personally i can observe this very well in nature, looking at the mountain out of the window.
On a cloudy day with direct sunlight occluded, the surface of the mountain looks flat, low contrast, and not detailed at all.
But if the sun is visible, the hard shadows expose all those tiny details of the rocks or trees on it. Only then it looks detailed at all.

That's why Epic introduced a virtual shadow map system together with Nanite. This did not receive as much attention, but it is maybe equally important. It divides the SM into many smaller parts, so they come closer to an ideal ratio of one SM texel per screen pixel. If we have many small SMs at different resolutions, hardware rasterization again becomes inefficient, because it's meant to render just one view at high resolution, not many views at varying resolution. So the compute rasterizer is very beneficial for generating SMs.
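The 'one shadow-map texel per screen pixel' target can be illustrated with a toy resolution picker. This is my own simplified illustration; a real virtual shadow map system works per page and clipmap level and is far more involved:

```python
import math

def shadow_page_resolution(world_extent: float, px_per_world_unit: float,
                           max_res: int = 16384) -> int:
    """Pick a power-of-two shadow map resolution so one texel covers
    roughly one screen pixel of the receiving surface.

    world_extent:      size of the area this shadow page covers, in world units
    px_per_world_unit: how many screen pixels one world unit projects to
    """
    ideal = world_extent * px_per_world_unit  # texels needed for a 1:1 ratio
    res = 1 << max(0, math.ceil(math.log2(max(ideal, 1.0))))
    return min(res, max_res)

# A distant receiver (few pixels per world unit) gets a small, cheap page;
# a close-up receiver gets a high resolution one.
print(shadow_page_resolution(10.0, 2.0))    # distant receiver
print(shadow_page_resolution(10.0, 100.0))  # close receiver
```

Many such pages at wildly different resolutions are exactly the workload where a compute rasterizer beats the hardware path, as described above.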

That's often overlooked by people criticizing Nanite for its performance overhead.

Still, in the long run we would love to ditch shadow maps, replacing them with accurate raytracing.
But currently that's not possible, since RT APIs are not flexible enough to handle any fine grained LOD solution.
It would work with discrete LODs, however. Those can be traced without issue.


Joe, do you think i can increase my polygon count? It would then be 3 billion not counting projectiles or particles, so maybe 5-6 billion not counting LODs, culling and what's in the frustum. Does this change anything? Hopefully my LOD0 is what my artist says haha. Before you ask why: i'm planning on 3D scanning physical models, hence why it's such a high polygon count.

Honestly i have no idea how many of the triangles will be in the frustum. I thought that if it handles my worst case scenario it can handle everything…

You know me and my polygon count calculations ;D

Sorry I'm slow so i don't need massive pixel resolution?, i just want to know how much without having jaggies :P

Regarding low poly shadows, i was thinking the space ships have “trenches” and i would copy the hull only, instead of all the small details in the trenches.

Newgamemodder said:
Joe, do you think i can increase my polygon count? It would then be 3 billion not counting projectiles or particles, so maybe 5-6 billion not counting LODs, culling and what's in the frustum.

Well, for some real world data you can look here: https://github.com/Activision/caldera

They say 2 billion points. Not sure what they mean exactly, but it seems 3 billion is practical.

Newgamemodder said:
Before you ask why: i'm planning on 3D scanning physical models, hence why it's such a high polygon count.

Yeah, scanning means more polygons than manually modelled stuff, even with automated reduction.
However, scanning what? Real world miniature models of spaceships? Spaceships are hard surface models, and scanning may create too many artifacts. Modeling them from scratch on the computer might be less work than tedious manual cleanup. (No experience - just a guess)

Newgamemodder said:
Sorry I'm slow so i don't need massive pixel resolution?, i just want to know how much without having jaggies :P

To avoid jaggies, you always need the same number of AA samples per pixel, no matter what the resolution is. Assuming the 4K display is larger than the HD display, the pixels are equally large on both.

To see all the glorious detail, you need to walk closer to the wall with a HD display than with a 4K display.
So it does not matter. The player can observe all the detail no matter what.

But ofc. if you advertise 4K support, people assume the maximum detail of content is higher because of that, although technically that's not necessary. Even with ‘only HD content’ the 4K player still has the advantage of seeing more details in the distance.

And you can't decide about screen resolution anyway. You can only make sure your game works with any resolution.

Regarding low poly shadows i was thinking the space ships have “trenches” and i were to copy the hull only instead of all small details in the trenches

If the trenches are in shadow they have high contrast to the lit surface. And this contrast gives the gritty details.
If the trenches are not shadowed but equally lit as the surface, the trenches will be barely visible.

I expect spaceships as seen in Star Wars benefit at least as much as the mountain in my example.
If we take something like a car, that's a smooth surface causing no detailed self shadows, so here the low poly shadow caster would be good enough. The Enterprise from Star Trek is also smooth and would benefit less from accurate shadows.

It depends on content ofc., but you should try it out.

Are 2 billion points with or without mesh shaders?

About the 3 billion triangles: that's without projectiles or particles, and without LODs and other types of culling :D so i guess it's possible then. Should i still have a massive detailed shadow, and should i count it in my calculations? And i assume the total count can be higher on, let's say, a 6090 or a new titan.

The problem is i don't know how many will be in view and how far away/close they are nor how many triangles LOD0 and shadow will have… Right now with 5 million per ship WITHOUT shadows it's 3.6 billion not counting projectiles or particles, also it still doesn't count lods or culling…

With HP shadows it will probably be 7.2 billion…

Sounds really stupid but the high shadow count is giving me anxiety :/

Regarding shadows what if it's a star destroyer?

Aha so you mean the samples of AA like 4K = 8M * 8x8 is for all resolutions, i understand that now! :D

It doesn't matter if it's HD, that would be 1K = 2,073,600 * 8x8

So i should use 8 taa?

Newgamemodder said:
So i should use 8 taa?

I used the 8x8 samples only to compare it with standard multi sampling.

But TAA is not a spatial average; it is technically a temporal, exponential moving average.
Works like this:

alpha = 0.05 // usually a low value
newPixelColor = previousPixelColor * (1-alpha) + currentSample * alpha // preserving the former samples with a high weight

So it's actually infinite samples, accumulated over time to get some average.

The problem is, since camera and objects move, it is difficult to find the previousPixelColor, which may come from another pixel.

So the quality of TAA mostly depends on how well we can calculate this (using motion vectors), and how well we can decide to reject previous samples in case they are no good fit (e.g. because of disocclusion).

So there is no number that could describe TAA quality. It's all about implementation details.
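The moving average itself is easy to verify in isolation. A minimal sketch per pixel channel, deliberately ignoring the reprojection and history-rejection logic, which is where the real difficulty lies:

```python
def taa_accumulate(history: float, sample: float, alpha: float = 0.05) -> float:
    """One TAA accumulation step: keep most of the history,
    blend in a small fraction of the new sample."""
    return history * (1.0 - alpha) + sample * alpha

# Feeding a constant signal converges toward that signal over time;
# after n frames the accumulated value is 1 - (1 - alpha)^n.
color = 0.0
for _ in range(200):
    color = taa_accumulate(color, 1.0)
print(color)  # close to 1.0
```

This is why TAA behaves like an effectively infinite sample count: older samples never fully disappear, their weight just decays geometrically frame by frame.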

Newgamemodder said:
i assume the total count can be higher on, let's say, a 6090 or a new titan.

Sure, but it seems the average player compute power goes down, not up.
People buy more Switches, handhelds, or maybe soon slick mini PCs than overpriced and overspecced dGPUs.

Moore's Law is dead, and HW performance stagnates. I don't expect big speedups anymore, at least not affordable ones.
What we do get instead is increasing variance: A 4090 is 60 times faster than a Steam Deck, according to teraflops.

So we need to make our engines more scalable, and the enthusiast niche is no longer an indicator to predict future HW power.

This topic is closed to new replies.
