JoeJ said:
A decade ago, STL for gamedev was considered a no-go due to bad performance. Has this changed? And if so, why? Is current HW so powerful we can just waste it? Seems not, as performance issues are seemingly the main concern and critique on recent games.
I don't work directly in the games industry, but adjacent to it, and I'd guess that still holds true today. The main issues with STL are design-related and unlikely to be fixed. The main problems I've seen are:
- Exceptions are used everywhere, which bloats the binary and has other downsides (code can get slower because less of it fits in the instruction cache).
- Support for custom allocators in containers is not ideal. It's not easy to have e.g. a vector that uses a custom generic allocator returning 32-byte aligned memory for AVX, primarily because the allocator type must be templated on the type it allocates (a terrible design choice), rather than exposing a void* interface similar to malloc()/free().
- Inconsistency in implementation and performance across platforms. You can't be sure you are getting an optimal implementation; you have to trust the platform SDK to do the right thing, which is often not the case. Example: std::vector's growth factor may be 2 or 1.5, depending on the platform.
- Bad specifications, e.g. std::deque, which is hamstrung by the spec into being inefficient (MSVC's implementation, for instance, is locked to tiny 16-byte blocks, so for most element types it degenerates into little more than a vector of pointers).
- wchar_t/char nonsense that varies by platform. They should have mandated UTF-8 everywhere as the default, and provided built-in conversions to UTF-16, UTF-32, and ASCII.
- std::atomic is non-copyable for no good reason. This drives me nuts because it forces you to write custom copy constructors everywhere, e.g. just to store a class containing a std::atomic inside a std::vector. It's cancerous.
For these reasons and others I have written my own “standard library” over the years, which avoids these issues. I have the benefit that I don't have to stick to the specification and can improve the design at will, without compatibility worries. That iteration process has produced a very nice code base, since I can learn from STL's mistakes.
It saddens me to see programmers write code in less optimal ways as they become more disconnected from what the hardware is doing. There is definitely a trend in software towards bloat (games are probably less affected due to their tight performance constraints). Take the built-in calculator on macOS as an example: on my 2013 personal machine running a 10-year-old OS, it opens in about 0.3 seconds, while on my newer, faster work laptop it takes 1 second to appear and then isn't even responsive for another second after that. Hardware gets faster, and programmers loosen their belts some more, adding unnecessary abstractions and layers. At least this creates an opening for high-performance software that is many times more responsive than the majority.