Consider the following (real) scenario: there is a hardware vendor that ships a device with closed-source binary drivers written in C++ and compiled with VS2010. The codebase for the parent project requires C++11 / VS2013. How do you make this work when you don't have access to the original source? (Answer: you write an IPC shim to isolate the driver process from your application.)
A closed-source driver with a C++ interface is madness. Regardless of what the driver is written in, it needs a pure C interface. Offering a C++ interface on top of that as an option might be fine, but I would flatly say that a C++ interface to anything you cannot recompile yourself is completely useless, whether we are talking about a library or a driver.
Were the drivers written in C, chances are this shim would have been unnecessary in the first place.
What the drivers are written in is irrelevant. As long as they expose a C interface, the vendor can write the driver in Brainfuck for all I care.
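To make that concrete, here is a minimal sketch (with made-up names, not any real driver API) of what such a pure C facade looks like. Only C types and an opaque handle cross the boundary, so the caller's compiler version, standard library and exception machinery never matter:

    /* driver_api.h - hypothetical pure C facade over a C++ driver. */
    #ifdef __cplusplus
    extern "C" {
    #endif

    typedef struct drv_device drv_device;              /* opaque handle      */

    drv_device *drv_open(const char *name);            /* NULL on failure    */
    int         drv_read(drv_device *dev,
                         unsigned char *buf, int len); /* bytes read, or -1  */
    void        drv_close(drv_device *dev);

    #ifdef __cplusplus
    }
    #endif

Behind that header the vendor is free to use whatever C++ (or Brainfuck) they like; the application only ever links against a stable C ABI.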
You might think this is an academic concern. In that case, I implore you to visit the download page of any popular C++ library and look at the binaries on offer: they have to ship separate builds for each major compiler release. Gigabytes of waste, all because of a supposedly "efficient" language!
I don't see the problem. Generating the builds is pretty much a no-brainer, since it can be automated quite easily. Apart from that, I see no way around it. Even with C you have to deal with library boundaries (usually that means opaque pointers and pairs of alloc/free functions). C++ allows you to ignore all that, provided you can make guarantees about the runtimes and compilers involved. Usually you don't switch compilers or add libraries very frequently, and the cost of downloading the right build or building a library yourself is vanishingly small compared to dealing with each library's resource management by hand all the time. I'd rather take C++ and proper RAII, or similar concepts like Qt's QObject ownership semantics, over doing all of that manually every time.
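To illustrate the comparison, here is a sketch with made-up names: the C-style boundary with its opaque pointer and create/destroy pair, and the one-line RAII wrapper C++ lets you write so the free can never be forgotten:

    #include <memory>

    // Hypothetical C boundary: opaque pointer plus a create/destroy pair.
    extern "C" {
        typedef struct parser parser;
        parser *parser_create(void);
        void    parser_destroy(parser *p);
    }

    // C++ side: wrap the pair once; RAII then releases the parser on every
    // exit path, including exceptions and early returns.
    using parser_ptr = std::unique_ptr<parser, decltype(&parser_destroy)>;

    parser_ptr make_parser() {
        return parser_ptr(parser_create(), &parser_destroy);
    }

In plain C, every caller has to remember to pair each parser_create with a parser_destroy by hand.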
I don't see a way around that either. We could move everything into some kind of virtual machine (C# or Java, for example), but that comes with its own problems, and since you are so concerned with every last bit of performance C++ might cost you, it does not seem like a viable solution.
We could also consider attaching a cleanup function to every non-trivial piece of data we hand across library boundaries. Of course, that would cost at least sizeof(function pointer) of extra storage per object, and we would have to pay for something like a virtual function call whenever such a piece of data is destroyed.
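Roughly like this, as a sketch rather than a concrete proposal: each boundary-crossing object carries a pointer to its own cleanup function, which costs one extra pointer of storage and an indirect call on destruction, much like std::shared_ptr's type-erased deleter does:

    // Every object handed across the boundary knows how to destroy itself.
    struct boundary_object {
        void (*destroy)(boundary_object *self);   // one extra pointer of storage
        /* payload follows */
    };

    void release(boundary_object *obj) {
        if (obj)
            obj->destroy(obj);   // indirect call, much like a one-entry vtable
    }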
There are workarounds, which boil down to either rebuilding your whole dependency tree with the same compiler (which can easily take hours) or pinning your compiler version (in which case you'd probably still be using VS2008 or VS2010, without C++11 support). Or you could be using a better language, which wouldn't suffer from this problem in the first place.
As I said, picking C won't really help, because it simply shifts onto the developer a lot of work that could be done automatically. It does not really solve the problem either, because C has the same library-boundary problems C++ suffers from.
The same goes for vectors/lists/etc. - most of the time there are performance issues, people are just writing silly C++ code, where the equivalent C code would look absolutely horrible and make their mistakes obvious.
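A typical (hypothetical, but very common) instance of that kind of silliness: the C++ version compiles and looks innocent, while writing the same thing out in C would force you to spell out the copy and make the waste obvious:

    #include <string>
    #include <vector>

    // Innocent-looking, but copies the whole vector and every string in it
    // on each call, because the parameter is taken by value.
    std::string first_name(std::vector<std::string> names) {
        return names.front();
    }

    // Same function without the hidden copy: pass by const reference.
    std::string first_name_fixed(const std::vector<std::string> &names) {
        return names.front();
    }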
Which is actually one of the issues of C++: every problem can be solved in multiple ways, and the *obvious* way is often the *wrong* way. It takes years of experience to learn the right way, and you must be a very lucky person indeed if your team consists solely of people who can tell good C++ code from bad. (In which case, I'd love to work with your team. Drop me a line.)
I don't see how being able to solve problems in multiple ways is a bad thing. There are always general solutions and highly specific ones. For example, std::shared_ptr is a very general solution: it can manage any pointer you hand it, provided you are happy with the default deleter or specify a correct one. If you just need to manage the lifetime of something, it does the job simply and well. If everything everywhere is a std::shared_ptr, you are probably in for some problems (cycles, the cost of copy-constructing and destroying std::shared_ptr instances all the time), and you should have written something that fits your specific problem instead.
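As an illustration of that generality, here is std::shared_ptr managing a plain C FILE* with a custom deleter (a minimal sketch, nothing project-specific). The flexibility is paid for with a heap-allocated control block and atomic reference counting on every copy, which is exactly the cost mentioned above:

    #include <cstdio>
    #include <memory>

    int main() {
        // fclose runs automatically when the last owner goes away,
        // whichever path the program takes.
        std::shared_ptr<std::FILE> file(std::fopen("log.txt", "r"),
                                        [](std::FILE *f) { if (f) std::fclose(f); });

        // Cheap to write, not free to run: each copy bumps an atomic
        // reference count in the shared control block.
        auto another_owner = file;
        return 0;
    }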
As always, good and experienced developers are a limited resource. As always, junior programmers need to be mentored and supervised, or at least to work under good coding guidelines watched over by a senior programmer. Otherwise, junior programmers are a threat to a project in any language.
At my last job we used a pointer template which acted like (and had the same cost as) a raw pointer, but in development builds it would flag cases where it was used while uninitialized, or where it was leaked / never cleaned up. That kind of template has absolutely zero cost in the shipping build, yet it tightens up your engineering practices.
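I obviously don't know what that exact template looked like, but a minimal sketch of the idea might be the following: in development builds it asserts on use-before-initialization and on being destroyed while still owning memory; with NDEBUG defined the asserts compile away, and the real thing would presumably #ifdef out the bookkeeping bool as well so that nothing but a raw pointer remains:

    #include <cassert>

    template <typename T>
    class Ptr {
        T   *p_    = nullptr;
        bool owns_ = false;          // debug-only bookkeeping in the real thing
    public:
        Ptr() = default;
        explicit Ptr(T *p) : p_(p), owns_(p != nullptr) {}
        Ptr(const Ptr &) = delete;   // keep the sketch single-owner
        Ptr &operator=(const Ptr &) = delete;
        ~Ptr() { assert(!owns_ && "Ptr destroyed while still owning its object"); }

        T *operator->() const { assert(p_ && "use of uninitialized Ptr"); return p_; }
        T &operator*()  const { assert(p_ && "use of uninitialized Ptr"); return *p_; }

        void destroy() { delete p_; p_ = nullptr; owns_ = false; }
    };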
A reasonable person might counterargue that (a) uninitialized pointers shouldn't even compile, and (b) if they did, your toolchain should at least be able to inform you of this error when you build with the, I don't know, "CC --inform-me-of-memory-leaks" option.
How is that supposed to work? C (and by extension C++) allows you to do all kinds of weird stuff with a pointer: for example, converting it to a uintptr_t, storing it in some structure and passing that structure to a completely opaque API (the Win32 API, for example). The pointer could then be retrieved at any arbitrary point in time by a different API call and freed. How do you track that automatically at compile time?
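Concretely, something like the following is perfectly legal C++ and defeats any compile-time ownership analysis (the API names are invented; think of window user-data or callback context parameters in real APIs). The pointer disappears into an integer handed to code the compiler never sees, and reappears later through another opaque call:

    #include <cstdint>

    // Opaque C API the compiler knows nothing about (hypothetical names).
    extern "C" void           api_store_context(std::uintptr_t ctx);
    extern "C" std::uintptr_t api_fetch_context(void);

    struct Session { int id; };

    void stash() {
        auto *s = new Session{42};
        // From here on, the pointer is just a number inside a black box.
        api_store_context(reinterpret_cast<std::uintptr_t>(s));
    }

    void unstash_and_free() {
        // ...arbitrarily later, possibly in a different module:
        auto *s = reinterpret_cast<Session *>(api_fetch_context());
        delete s;
    }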
Of course, nothing can ever be that reasonable in C++, so the solution is:
(a) to force every project to re-implement "template<typename T> class my::Ptr<T>" from scratch, because that's a good programming exercise;
(b) destroy their build times in the process, by recompiling every instantiation of Ptr<T> in every translation unit and discarding the duplicates at link time;
(c) destroy any hope of interoperability because every project now has a different, incompatible "template<typename T> class your::Ptr<T>" implementation.
(a) If something turns out to be very useful, it can be found in the standard library, in popular libraries like Boost, or in the company's private libraries.
(b) I work in a fairly big codebase, and while there obviously is a cost, I would not call it significant. That aside, a second or two of extra time per build would be acceptable if it helps prevent a bug that could take days of annoying debugging to find. Of course, my link times for non-invasive changes are smaller than that...
(c) I see no reason for something like that ever to be in the public interface of a project.
Bonus points if your project redefines primitive types and has its own string class, too.
Everything wrong with C++, condensed into a single concrete example. But yeah, "zero cost in the shipping build" indeed. Cheers!
I can see a lot of good reasons why you should not use C++ for everything (there is a reason after all other languages still exist). You, however, seem to be obsessed with turning everything in the language into a lemon, biting into it and then sucking it dry.
Personally, I enjoy working with C++ on my hobby projects (even though the majority of my paid work also deals with the language). That was not always the case. I have looked at quite a few other languages over the years, worked with some of them quite a bit, and at least gained some understanding of others. But after all that time, I returned to C++ for my hobby projects, and I actually enjoy it. With nearly 20 years of experience in the language, modern compilers and proper library support, it can be fun.
I'm not saying everyone should work in C++. I'm not saying every project has to be done in C++. But I also object to the way you are completely ignoring everything good about the language and turning it into something bad.