MagnusWootton said:
These days it is 1 triangle per pixel; output from OpenSCAD (it's a procedural scripting CAD tool) puts out 1 triangle per pixel. Video cards can handle it these days.
They can handle it well enough for some applications, but it is inefficient. If you render one triangle per pixel, it's faster to use a software rasterizer in compute than to rely on (current) hardware acceleration.
So there is a pending decision to make: either GPU vendors implement efficient micro triangle rasterization in hardware, or they deprecate rasterization altogether and remove those ROPs from future GPUs. I guess we'll see the former.
It's also pretty clear that one triangle per pixel does not really make sense anyway, since if we want that, it's much faster to render single points instead of complex triangles.
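To make the 'software rasterizer in compute' point concrete, here's a toy edge-function rasterizer in Python. This is the kind of per-triangle loop a compute-shader rasterizer runs; all names and the setup are illustrative, not from any real renderer. The point is that for a pixel-sized triangle the bounding box covers only a couple of pixels, so the inner loop does almost no work.

```python
import numpy as np

def edge(ax, ay, bx, by, px, py):
    """Signed area test: > 0 when (px, py) is to the left of edge a->b."""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def raster_tri(buf, v0, v1, v2, value):
    """Fill pixels whose centers lie inside the (counter-clockwise) triangle."""
    xs = [v0[0], v1[0], v2[0]]
    ys = [v0[1], v1[1], v2[1]]
    x0, x1 = int(min(xs)), int(max(xs)) + 1
    y0, y1 = int(min(ys)), int(max(ys)) + 1
    for y in range(y0, y1):            # tiny bounding box for micro triangles
        for x in range(x0, x1):
            px, py = x + 0.5, y + 0.5  # sample at the pixel center
            if (edge(*v0, *v1, px, py) >= 0 and
                edge(*v1, *v2, px, py) >= 0 and
                edge(*v2, *v0, px, py) >= 0):
                buf[y, x] = value

buf = np.zeros((4, 4))
raster_tri(buf, (0.0, 0.0), (2.0, 0.0), (0.0, 2.0), 1.0)  # pixel-scale triangle
print(buf)  # only 3 of the 16 pixels are covered
```

A hardware rasterizer pays fixed per-triangle setup cost that is amortized over many covered pixels; when coverage drops to one pixel, a loop like this in compute wins.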
MagnusWootton said:
But there's an issue with floating point: when the triangles are that close to each other, they tend to come out as lines instead of triangles in an STL file, and the normals actually become non-computable. So there are issues with it, that's for sure.
Precision limitations of some file format are irrelevant. That's probably caused by writing too few digits in text-based data, but meshes are normally stored as binary data anyway.
The binary data is usually also quantized for compression, and this can cause close vertices to merge, so triangle areas become zero and normals undefined. But again, that's a matter of choosing the right settings.
Precision limitations do not prevent us from achieving crazy levels of detail. What hinders us is: 1. storage costs, 2. complex software, 3. HW performance.
MagnusWootton said:
If you have 1 triangle per pixel then you can do amazing self-shadowing; it works well with parallax occlusion mapping too. Then if you multisample on top of that (say 100 triangles per pixel) it actually blends all the micro-shadows into single pixels and you end up with something similar to a BRDF result with microfacets.
Yes, but that's a good example of ‘multisampling is too slow, and we should prefilter instead.’
The work on physically based shading did just that: it handles microfacets with a single roughness value, fitting an analytic function to measured real-world results to approximate things like self-shadowing of microfacets.
Knowing those simple functions, and having a texture with a roughness channel, we can model this effect very well with just one sample.
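As a minimal sketch of those 'simple functions': the GGX (Trowbridge-Reitz) distribution summarizes a whole field of microfacets with one roughness parameter, and the Smith term gives microfacet self-shadowing analytically. The parameter values below are just illustrative.

```python
import math

def ggx_ndf(n_dot_h, alpha):
    """GGX microfacet distribution D(h); alpha = roughness squared."""
    a2 = alpha * alpha
    d = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
    return a2 / (math.pi * d * d)

def smith_g1(n_dot_v, alpha):
    """Smith masking-shadowing term G1: analytic microfacet self-shadowing."""
    a2 = alpha * alpha
    return 2.0 * n_dot_v / (n_dot_v + math.sqrt(a2 + (1.0 - a2) * n_dot_v ** 2))

# At the mirror direction (n.h = 1) a smooth surface has a tall narrow peak:
print(ggx_ndf(1.0, 0.05 ** 2))   # smooth: very tall, narrow lobe
print(ggx_ndf(1.0, 0.5 ** 2))    # rough: much lower peak, wider lobe
# Slightly off-peak (n.h = 0.95) the wide rough lobe dominates instead:
print(ggx_ndf(0.95, 0.5 ** 2))
print(ggx_ndf(0.95, 0.05 ** 2))
```

One evaluation of these functions per pixel replaces averaging hundreds of geometric microfacet samples, which is exactly the prefiltering argument.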