
Does an advanced C++ programmer know all of C++'s features?

Started by July 24, 2021 06:29 PM
75 comments, last by hplus0603 3 years, 4 months ago

(Embedded video; should play from 12:09.)

🙂🙂🙂🙂🙂 ← The tone posse, ready for action.

hplus0603 said:
but all that complexity also makes producing working, bug-free code take me longer. And templates increase compile times

My experience too. If I handled such options with template specializations (which I considered), the ‘software architecture’ part of the work would become just tedious and distracting.
And while I currently only work on tools, I'm not sure I would take on that burden for each and every piece of runtime code either.

Why should we optimize the hell out of everything, if artists later generate shaders from node graphs, the in-game GUI runs by interpreting HTML and CSS, or the whole game logic is implemented in scripts? :)

(I didn't know about std::optional though. I'm really slow at catching up and only adopt what I occasionally see in other people's code.)
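
For anyone else catching up: a minimal sketch of std::optional in practice (C++17; the findPlayerName function is invented purely for illustration):

#include <iostream>
#include <optional>
#include <string>

// Hypothetical lookup: the return type says "maybe a value",
// with no sentinel value or out-parameter needed.
std::optional<std::string> findPlayerName(int id)
{
    if (id == 42)
        return "hplus0603";  // engaged: holds a value
    return std::nullopt;     // disengaged: holds nothing
}

int main()
{
    if (auto name = findPlayerName(42))   // an optional tests as bool
        std::cout << *name << '\n';       // operator* reads the value
    std::cout << findPlayerName(7).value_or("unknown") << '\n';  // fallback
}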

hplus0603 said:
They have distributed compile farms, and any 0.1% savings in runtime they can achieve saves their companies millions of dollars every month.

That's really interesting to hear. I always assumed some sloppiness about performance was the norm almost everywhere nowadays.


hplus0603 said:

However, what the true baller C++ programmers want you to do is use template specializations/deductions to generate optimal code in each of the present/absent combination cases. And here is where my frustration with C++ starts showing: the goals of the library writers and standards developers are slightly different from my goals. Most of them write code that runs on Xeon datacenter servers with terabytes of RAM. They have distributed compile farms, and any 0.1% savings in runtime they can achieve saves their companies millions of dollars every month. They're totally Stockholm Syndromed into the idea that no cost is too high for a programmer in the pursuit of full efficiency.

Games need high frame rates, true, but we also need to actually ship tons of content with a smallish workforce. All the complexity that was added with rvalue references (and thus xvalues, glvalues, and so on) may let you write certain classes of templates more easily, and save one instruction moving a value in an edge case, but all that complexity also makes producing working, bug-free code take me longer. And templates increase compile times, and frequently also executable binary size, which has its own follow-on costs: loading plugins takes longer at runtime, invoking a tool takes longer per invocation, and so forth.
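
To make that trade-off concrete, here is a hypothetical sketch (all names invented) of the two styles being contrasted: one function with a runtime branch, versus compile-time selection that stamps out a separate, branch-free function per present/absent combination:

#include <cstddef>

// Style 1: one function, one runtime branch. Easy to write, debug, and compile.
float shade(const float* albedo, const float* emissive, std::size_t i)
{
    float c = albedo[i];
    if (emissive)         // checked on every call
        c += emissive[i];
    return c;
}

// Style 2: the compiler emits a separate function for each combination.
// Faster inner loops, but more machinery, slower compiles, bigger binaries.
template <bool HasEmissive>
float shadeT(const float* albedo, const float* emissive, std::size_t i)
{
    float c = albedo[i];
    if constexpr (HasEmissive)   // decided at compile time, no runtime branch
        c += emissive[i];
    return c;
}

// Callers dispatch once, outside the hot loop:
//   emissive ? shadeT<true>(a, e, i) : shadeT<false>(a, nullptr, i);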

Maybe it's a problem of different users and language maintainers pulling in different directions. I know of developers who mainly want to write beautiful and maintainable code, those who want to write performant code, and those who just want to ship features. These three use cases generally put different requirements on the code and definitely lead to different approaches. It seems C++ allows programmers many ways of solving the same problem (partly for legacy reasons), which makes these approaches branch off even further than they would if the language didn't allow all these options.

The only companies that have lots of C++ developers with lots of credibility and lots of time/money to attend standards meetings are Facebook and Google. Amazon could, but is mostly Java; Apple doesn't believe in playing with others; Netflix and Uber and the rest aren't big enough, despite what they want to think. (Plus they're not as heavy into C++, from what I can tell.)

An engineer whose job is to optimize ad serving speed across the world, where the software literally uses a billion dollars of electricity, has a different incentive than an engineer who needs to make the shiny rings go “bling” when you hit them :-)

Why should we optimize the hell out of everything, if artists later generate shaders from node graphs, the in-game GUI runs by interpreting HTML and CSS, or the whole game logic is implemented in scripts? :)

Think of it this way: if we hadn't optimized the shaders and HTML and CSS and so on, they couldn't be doing that, and they'd be less effective at making games that users enjoy! And shader node graphs aren't so bad; from what I understand, the shader graphs are quite similar to the data structure the graphics driver builds internally so it can optimize the GPU-executable code it transpiles them into. If anything, it's the abstraction of GLSL or SPIR-V that gets in the way ;-)

enum Bool { True, False, FileNotFound };

Even unique pointers have overhead, especially if you are going to pass them around.

There is a lot of bullshit in the standards and libraries to make it so that they really don't. Assuming you have a bleeding-edge compiler and standard library, a unique_ptr<> should be as efficient as a raw pointer, and give you some ownership guarantees. Still no Rust, of course.
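
A quick way to convince yourself, assuming a mainstream standard library (libstdc++, libc++, MSVC STL) and the default deleter:

#include <memory>

// With the default (stateless) deleter, unique_ptr is just a pointer in
// memory; this assertion holds on the mainstream implementations.
static_assert(sizeof(std::unique_ptr<int>) == sizeof(int*),
              "unique_ptr adds no per-object size overhead");

int main()
{
    auto p = std::make_unique<int>(5);  // one allocation, one clear owner
    return *p;                          // deleted automatically at scope exit
}

The passing-around overhead mentioned above comes mostly from calling conventions: because unique_ptr has a non-trivial destructor, common ABIs pass it in memory rather than in a register the way they would pass a raw pointer.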

There is nothing wrong with passing raw pointers as input parameters to functions.

That kind-of depends on how many times you've burned yourself with object leaks, or use-after-free, or that class of bugs. Then again, the really cool pointers are to objects that live “for the current frame” in an arena that gets re-used from scratch in the next frame, with no “freeing” going on at all. Saving one of those pointers in some struct will cause all kinds of delightful bugs, and you really need a restricted-effects environment like Haskell to programmatically avoid that. (And Haskell isn't that great for games, for other reasons.)
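
For the curious, a deliberately minimal sketch of that kind of per-frame arena (not production code; a real one would handle base-pointer alignment, out-of-memory policy, and so on):

#include <cstddef>

// Bump allocator: alloc() is a pointer increment; "free" is resetting the
// whole arena next frame. Nothing is destructed, so only trivially
// destructible frame data belongs here, and any saved pointer into the
// arena dangles the moment the next frame begins.
class FrameArena {
public:
    explicit FrameArena(std::size_t bytes)
        : base_(new unsigned char[bytes]), size_(bytes), used_(0) {}
    ~FrameArena() { delete[] base_; }

    // align must be a power of two
    void* alloc(std::size_t bytes, std::size_t align = alignof(std::max_align_t)) {
        std::size_t at = (used_ + align - 1) & ~(align - 1);  // round up
        if (at + bytes > size_) return nullptr;               // frame budget blown
        used_ = at + bytes;
        return base_ + at;
    }

    void reset() { used_ = 0; }  // once per frame; every outstanding pointer dies

private:
    unsigned char* base_;
    std::size_t size_;
    std::size_t used_;
};

// Per frame:
//   arena.reset();
//   auto* verts = static_cast<float*>(arena.alloc(count * sizeof(float)));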

enum Bool { True, False, FileNotFound };

hplus0603 said:

Even unique pointers have overhead, especially if you are going to pass them around.

There is a lot of bullshit in the standards and libraries to make it so that they really don't. Assuming you have a bleeding-edge compiler and standard library, a unique_ptr<> should be as efficient as a raw pointer, and give you some ownership guarantees. Still no Rust, of course.

I've seen a couple of videos on the overhead of unique_ptr; however, you're right that it isn't really much, and the trade-off is worth it. The problem with using it everywhere is that it is supposed to signify ownership, and when you pass an object into a function for some processing, that function does not really own the object. You can always pass a unique_ptr by reference, but now you are creating an extra complication for no other reason than to stick to some mindset. I agree with what Herb Sutter says in the video Fleabay posted. However, unique_ptr itself isn't bad, if used correctly.

My main issue is with shared_ptr, which I consider just bad 90% of the time, and this is coming from someone who has used reference counting for 30 years. An intrusive reference count is a better solution for the vast majority of cases where you need it.
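
For comparison, a bare-bones version of what an intrusive count looks like (a single-threaded sketch with invented names; a real implementation would make the count std::atomic):

#include <utility>

// The count lives inside the object itself, so shared_ptr's separate control
// block allocation goes away, and any raw pointer to the object can be turned
// back into an owning reference.
struct RefCounted {
    int refs = 0;
};

template <class T>
class IntrusivePtr {
public:
    IntrusivePtr(T* p = nullptr) : p_(p) { if (p_) ++p_->refs; }
    IntrusivePtr(const IntrusivePtr& o) : p_(o.p_) { if (p_) ++p_->refs; }
    ~IntrusivePtr() { if (p_ && --p_->refs == 0) delete p_; }
    IntrusivePtr& operator=(IntrusivePtr o) {  // copy-and-swap
        std::swap(p_, o.p_);
        return *this;
    }
    T* operator->() const { return p_; }
    T& operator*() const { return *p_; }
private:
    T* p_;
};

struct Thing : RefCounted { int hp = 100; };
// IntrusivePtr<Thing> a(new Thing), b = a;  // a and b share one count, refs == 2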


Gnollrunner said:
it is supposed to signify ownership, and when you pass an object into a function for some processing, that function does not really own the object

unique_ptr<Thing> const & is a pretty good way of passing that around.
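
Laid out side by side, the options look something like this (hypothetical signatures, following the common guideline that plain non-owning access should just take a reference or raw pointer):

#include <memory>
#include <utility>

struct Thing { int hp = 100; };

void inspect(const Thing& t) {}                   // non-owning, never null
void poke(Thing* t) {}                            // non-owning, null means "optional"
void observe(std::unique_ptr<Thing> const& t) {}  // sees ownership, can't take it
void consume(std::unique_ptr<Thing> t) {}         // takes ownership outright

void demo()
{
    auto t = std::make_unique<Thing>();
    inspect(*t);            // most functions only need this much
    poke(t.get());
    observe(t);
    consume(std::move(t));  // transfer must be spelled out; t is now empty
}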

Gnollrunner said:
no other reason than to stick to some mindset

Conventions are there for a reason. If the reason gives you value, use them. If not, don't. When the value is “I have a clear way to signal ownership where the compiler will tell me when I try to do it wrong,” that's pretty reasonable value to me. If you pass a bare pointer, the compiler won't tell you you got it wrong when you call delete on it.
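
As a concrete instance (hypothetical snippet), here is the kind of mistake that only becomes a compile error once ownership lives in the type:

#include <memory>
#include <utility>

struct Thing { };

void demo()
{
    Thing* raw = new Thing;
    std::unique_ptr<Thing> owned(new Thing);

    delete raw;                     // compiles; nothing stops a second delete later
    // delete owned;                // error: can't delete what you don't own outright
    // auto copy = owned;           // error: two owners would mean a double delete
    auto moved = std::move(owned);  // a transfer has to be spelled out
}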

Gnollrunner said:
An intrusive reference count is a better solution for the vast majority of cases where you need it.

std::enable_shared_from_this<> is pretty close for those use cases. And you do get weak_ptr to go with it, which you'll have a hard time implementing well in an intrusive-counted system like COM.
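
A small sketch of that pairing, using only the standard C++11 API:

#include <iostream>
#include <memory>

struct Node : std::enable_shared_from_this<Node> {
    // Hand out an owning reference to *this that shares the existing count.
    std::shared_ptr<Node> self() { return shared_from_this(); }
};

int main()
{
    auto n = std::make_shared<Node>();  // must already be owned by a shared_ptr
    std::weak_ptr<Node> w = n;          // observes without extending lifetime

    if (auto strong = w.lock())         // upgrade only if the object is still alive
        std::cout << "alive\n";

    n.reset();                          // last strong reference released
    std::cout << (w.expired() ? "expired\n" : "alive\n");
}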

enum Bool { True, False, FileNotFound };

This topic is closed to new replies.
