It’s like there’s a problem with being explicit. Parental warning: EXPLICIT language LOL
Does an advanced C++ programmer know all of C++'s features?
The obfuscation argument surely goes both ways.
If we use auto for everything (out of laziness), code becomes harder to read because context is missing.
If we write out lengthy and complicated types, the logic becomes obfuscated from too much clutter. E.g.:
std::unordered_map<uint64_t, std::pair<uint64_t, float>> map;
//...
auto result = map.find(myID); // I may not know precisely what my result is, but the details may not matter, and I may still prefer this over spelling it all out
That's probably the main point where it's a matter of taste and opinions vary.
But I guess we all agree that 'lazy auto' is bad, and that there are cases where auto is not optional but simply required.
- Read references and watch talks.
- Read other people's code.
- Read more of other people's code.
- No, even more. Read MASSIVE amounts of other people's code, from all different kinds of places.
- Write your own code, and iterate on it until you actually understand it.
- Goto 1
taby said:
Otherwise, pointers are great.
[citation needed]
Pointers are often useful. Pointers are also a consistent source of foot-guns. Apart from a handful of extremely well-tested low-level libraries, I'd go so far as to say most C++ code using raw pointers should be considered buggy.
Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]
swiftcoder said:
I'd go so far as to say most C++ code using raw pointers should be considered buggy.
So I should quit using the standard library?
🙂🙂🙂🙂🙂 ← The tone posse, ready for action.
Either that, or you consider it one of those…
swiftcoder said:
extremely well-tested low-level libraries
I often use pointers for optional function parameters. If you pass a null pointer, the function does not calculate the related extra stuff.
fleabay said:
So I should quit using the standard library?
Thus the “well-tested low-level” exemption in my assertion. Although, you'll find uses of raw pointers in the standard library (at least in the interface thereof) have reduced steadily over time.
JoeJ said:
I often use pointers for optional function parameters. If you give null pointer, the function does not calculate related extra stuff.
That's a reasonable use of pointers, but still can be error prone if exposed in public APIs (alignment requirements are especially fun). It's a shame the standards committee punted on std::optional<T&> :/
Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]
fleabay said:
So I should quit using the standard library?
The standard library has make_unique<> and make_shared<>, which should take care of 99% of your pointer needs.
That being said, when decoding or encoding network packets, a uint8_t* is generally the better option. With enough helper functions and safe wrappers, it can even be made to disallow remotely exploitable code by default! At that point, we're building our own “well-tested low-level library”, though! After all, those things must come from somewhere :-)
JoeJ said:
I often use pointers for optional function parameters.
You can use optional<> for this.
However, what the true baller C++ programmers want you to do, is use template specializations/deductions to generate optimal code in each of the present/absent combination cases. And here is where my frustration with C++ starts showing: the goals of the library writers and standards developers are slightly different from my goals. Most of them write code that runs on Xeon datacenter servers with terabytes of RAM. They have distributed compile farms, and any 0.1% savings in runtime they can achieve saves their companies millions of dollars every month. They're totally Stockholm Syndromed into the idea that no cost is too high for a programmer in the pursuit of full efficiency.
Games need high frame rates, true, but we also need to actually ship tons of content with a smallish workforce. All the complexity that was added with rvalue references (and thus xvalues, glvalues, and so on) may let you write certain classes of templates more easily, and save one instruction moving a value in an edge case, but all that complexity also makes producing working, bug-free code take longer. And templates increase compile times, and frequently also increase executable binary size, which has its own follow-on costs – loading plugins takes longer at runtime, invoking a tool takes longer per invocation, and so forth.
hplus0603 said:
fleabay said:
So I should quit using the standard library?
The standard library has make_unique<> and make_shared<>, which should take care of 99% of your pointer needs.
IMHO that sounds like a nightmare. I've seen code like this, where unique_ptr ownership is passed around with move. Smart pointers should either be top-level pointers or live inside class objects. Even unique pointers have overhead, especially if you are going to pass them around. There is nothing wrong with passing raw pointers as input parameters to functions.
I personally think shared_ptr is just bad. First there is the double allocation. But let's say you are using make_shared, so you can get rid of that. You still have double the pointer size, which is now useless since the control block sits with the object. Finally, there is no way to turn off thread safety, and if you aren't using weak pointers you are still paying for the weak counter. In any place where you really need reference-counted pointers (i.e. large complex DAG-like structures), shared_ptr carries a huge amount of overhead.