
How do compute shaders impact performance on the GPU?

Started September 16, 2018 02:27 AM
2 comments, last by NikiTo 6 years, 4 months ago

Do heavily loaded compute shaders affect the performance of the other "normal" render shaders, or do they run on a dedicated core?


Yes to the first, no to the second. The first programmable GPUs had separate hardware cores for vertex shading and pixel shading, but about ten years ago these merged into a single generic "shader core" design, which runs any kind of shader.

Pixel shaders, vertex shaders and compute shaders all queue up for the same generic shader cores, taking their turns one after another.

To go more in-depth - pixel shaders also have some extra dedicated hardware for writing to render-targets and depth-buffers. Draw calls also make use of special hardware that rasterizes triangles to determine which pixels need shading. In some rare cases, these special bits of hardware can be the bottleneck. For example, when rendering shadow maps, most of the work is rasterizing triangles and writing to a depth buffer - leaving the shader cores nearly idle with no work to do. This is why D3D12/Vulkan added async compute, which allows you to run compute shaders at the same time as normal draw calls - keeping the shader cores busy in this example (and getting some compute work done "for free", in time that would otherwise have been wasted).
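As a rough illustration of how async compute looks in D3D12: you create a second command queue of type COMPUTE alongside your normal direct (graphics) queue, and work submitted to it can overlap with draws. This is just a minimal sketch, assuming `device` is an already-created `ID3D12Device*`; real code needs fence synchronization between the queues.

```cpp
#include <d3d12.h>

// Sketch: create a compute-only queue next to the usual direct queue.
// `device` is assumed to be an existing, valid ID3D12Device*.
ID3D12CommandQueue* CreateAsyncComputeQueue(ID3D12Device* device)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE; // compute-only engine
    desc.Flags = D3D12_COMMAND_QUEUE_FLAG_NONE;

    ID3D12CommandQueue* computeQueue = nullptr;
    HRESULT hr = device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue));
    if (FAILED(hr))
        return nullptr;

    // Command lists submitted here may execute concurrently with draw calls
    // on the direct queue; use ID3D12Fence to synchronize where one queue's
    // work depends on the other's results.
    return computeQueue;
}
```

The key point is that overlap comes from the separate queue, not from different hardware cores - both queues ultimately feed the same shader cores, which is exactly why async compute helps when the graphics queue leaves those cores underutilized.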


I wonder to what extent GPU manufacturers can add support for new features to older hardware through driver updates. I mean, at the time a GPU was produced it had no async compute because the term didn't even exist yet, but with the latest driver updates async compute gets added.

As a developer, I don't need a big powerful GPU with lots of cores. But I do need a GPU with all the extra features.

This topic is closed to new replies.
