Juliean said:
No, it's just dramatically easier to introduce bugs when using new/delete, as forgetting to delete something won't even be noticed until you either run out of memory or end up having some resource locked permanently.
That's no different from having cycles with reference-counted pointers. You get a memory leak and nothing pops up to tell you that you have one.
It's also about all the places where you have to write the deletes. When a class member is a raw pointer that has to be deleted, you have to write that deletion in the destructor, the copy-assignment operator, and the move-assignment operator (if your class has those). With exceptions, you also have to explicitly write a catch that deletes any temporary at the point where it's currently "owned".
unique_ptr just takes care of that automatically.
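A minimal sketch of that burden (the class names are made up for illustration): with a raw owning member, every special member function has to repeat the delete; with std::unique_ptr, the compiler-generated destructor and move operations are already correct.

```cpp
#include <memory>
#include <string>
#include <utility>

// With a raw owning pointer, every special member must handle the delete:
class RawOwner {
    std::string* name_;
public:
    explicit RawOwner(std::string n) : name_(new std::string(std::move(n))) {}
    ~RawOwner() { delete name_; }                        // 1: destructor
    RawOwner(const RawOwner& o) : name_(new std::string(*o.name_)) {}
    RawOwner& operator=(const RawOwner& o) {             // 2: copy-assignment
        if (this != &o) { delete name_; name_ = new std::string(*o.name_); }
        return *this;
    }
    RawOwner(RawOwner&& o) noexcept : name_(o.name_) { o.name_ = nullptr; }
    RawOwner& operator=(RawOwner&& o) noexcept {         // 3: move-assignment
        if (this != &o) { delete name_; name_ = o.name_; o.name_ = nullptr; }
        return *this;
    }
    const std::string& name() const { return *name_; }
};

// With unique_ptr, destruction and moves come for free, and copying is
// disabled automatically (write a deep copy only if you actually need one):
class SmartOwner {
    std::unique_ptr<std::string> name_;
public:
    explicit SmartOwner(std::string n)
        : name_(std::make_unique<std::string>(std::move(n))) {}
    const std::string& name() const { return *name_; }
};
```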
First, my original argument was not that smart pointers are bad. My code is full of them, and I've been using them almost as long as I've been using C++, which is over 30 years. There is some overhead, which is not a big deal. If you are just using them inside a class, that's one thing. But once you start using them elsewhere, or passing around their data, you end up significantly changing some things. Let's stick to unique pointers. You can't pass them to a function directly by value without giving up ownership. You could pass a reference to them, but now you are dealing with another level of indirection every time you use them. And of course you can't copy them freely either. A GC handles all this without the programmer thinking about it, albeit with some performance penalty in many cases.
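The two options described above can be sketched like this (Person and the function names are hypothetical): passing the unique_ptr by value doesn't compile without a move, so you either pass a reference to the smart pointer (double indirection) or hand out the raw pointee.

```cpp
#include <memory>

struct Person { int age = 0; };

// Won't compile: unique_ptr has no copy constructor.
// void inspect(std::unique_ptr<Person> p);   // inspect(owner) is an error

// Passing a reference to the unique_ptr works, but every access now goes
// through two indirections (reference -> unique_ptr -> Person):
void birthday_ref(std::unique_ptr<Person>& p) { p->age += 1; }

// Passing the raw pointee avoids that, at the cost of handing out a
// non-owning pointer that could dangle:
void birthday_raw(Person* p) { p->age += 1; }
```

Typical call sites would be `birthday_ref(owner)` and `birthday_raw(owner.get())`.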
So it's not about having bugs or not, it's about how easy it is to have bugs vs. avoiding them automatically.
Unless you use them in a pretty restricted way, I don't think you are avoiding much automatically. Let's take your example. You have 3 Person objects. As long as you reference them through your person vector, you are fine. But if you get one Person object and pass it around, you either have to risk dangling pointers or suffer an extra level of indirection every time you access them, since to be safe you would use a reference to your smart pointer. Maybe that's OK, or maybe not. But if you only learned C++ at a cursory level, you are not equipped to make a good choice. Worse yet (and I have seen this), programmers attempt to pass ownership around through every function call by using std::move, because someone convinced them that all pointers should be owning pointers to avoid the slightest risk of a bug. This is, of course, at the cost of truly ugly, confusing code.
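The move-everything anti-pattern mentioned above looks roughly like this (a hypothetical sketch): ownership is threaded through every call, so each function must hand the pointer back, and the caller must reassign it every single time, or the object is silently destroyed.

```cpp
#include <memory>
#include <utility>

struct Person { int age = 0; };

// Ownership passed in and returned on every call. Forget the `return p;`
// (or drop the return value at the call site) and the Person dies here.
std::unique_ptr<Person> celebrate(std::unique_ptr<Person> p) {
    p->age += 1;
    return p;
}
```

The caller has to write `alice = celebrate(std::move(alice));` at every call; writing just `celebrate(std::move(alice));` leaves `alice` null, since a moved-from unique_ptr is guaranteed empty.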
So again I'm not against using unique_ptr, but I think you need to understand what it's doing so you can use it wisely.
I don't see an issue here. unique_ptr doesn't strive to solve all problems related to passing pointers/references to another function. It just replaces all your new/deletes. That's it. The rest of your application can stay the same. Yes, that means it doesn't remove the danger of dangling references, but that's OK. I don't think it's intended to do that. At least I don't treat it like that. It removes a lot of bugs and overhead associated with manual memory management, while keeping the performance/freedom of being able to just pass raw pointers around.
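That division of labor can be sketched in a few lines (Person is again made up): one owning container, zero deletes, and plain raw pointers used as non-owning observers exactly as they would be with new/delete.

```cpp
#include <memory>
#include <vector>

struct Person { int age = 0; };

// One owning container; no `delete` appears anywhere. Everything else
// keeps using plain raw pointers as non-owning observers.
int demo() {
    std::vector<std::unique_ptr<Person>> people;
    people.push_back(std::make_unique<Person>());

    Person* p = people.front().get(); // observer, same as with new/delete
    p->age = 30;

    return p->age;
} // `people` goes out of scope here and every Person is freed automatically
```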
I'm going to submit that dangling pointers cause far more bugs than forgetting to free something. Sure, the latter is a memory leak, but there are even tools to find those. Dangling pointers are not as easy to track down and are just as serious, worse I'd say.
C++ does have one main appeal though, which I appreciate a lot: zero-overhead abstractions. Yeah, things are rarely completely zero-overhead in practice, but things like std::unique_ptr are as close as it gets. There is like one additional instruction for non-inlined sink functions (where you need to transfer ownership), and actually no overhead at all when using the object stored in a std::unique_ptr vs. the raw pointer. So performance in this regard is a non-issue.
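Both cases mentioned above can be illustrated briefly (Widget and the function names are hypothetical): a sink function that takes ownership by value, where the caller's std::move is the only extra cost at a non-inlined call boundary, and read access, where operator-> is just a pointer dereference.

```cpp
#include <memory>

struct Widget { int value = 42; };

// Sink function: takes ownership by value. The Widget is deleted when `w`
// goes out of scope at the end of this function.
void consume(std::unique_ptr<Widget> w) {
    // ... use w, then let it die ...
}

// Reading through the unique_ptr compiles to the same code as reading
// through a raw pointer: operator-> is a plain dereference.
int read(const std::unique_ptr<Widget>& w) {
    return w->value;
}
```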
Again, I don't really have an issue with unique_ptr; it's mainly with shared_ptr, and that's mainly because of the implementation. I have my own version of unique_ptr, but that's because my library has had it for years.
There are two main gripes I have with this:
a) At what point do we draw the line? If what you say is true, then even using old-school C++ still does not give you the full picture. Unless you have learned ASM and looked at what the compiler produces and how that code is executed by the CPU, you don't fully understand what's going on in your code. But I'm not saying you should need to learn it. My point is rather that it doesn't matter to the average user. Especially a beginner.
A beginner won't be a beginner forever. If you teach them things like "use shared_ptr everywhere", you are teaching them something that can, and likely will, break their code at some point. Even if it doesn't, they will incur reference-counting overhead all over the place. If you learn C++ bottom-up, that will stick in your head and allow you to make design decisions accordingly, even if you are using modern C++. You don't need to understand the ins and outs of every architecture, but most computers are similar these days, so understanding things at the C level is pretty beneficial.
b) It's also a matter of the amount of information you can realistically learn. Nobody is going to learn C++ and understand everything. In fact, I've seen lots of beginners in every language just try to find examples that closely fit what they want to do, then copy/paste and modify them. Why is that? It's because they are overwhelmed. And no, that's not overwhelmed by modern features (the same thing happened back in the day), it's overwhelmed by how complicated simple things like a loop are. Nobody is going to understand what that vector-iterator loop does when they first learn C++; nobody is even going to remember how to write it. Nobody will exactly understand the intricacies of dangling pointers, memory management etc… in the first program they write, just because they have to new/delete everything themselves.
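For reference, the kind of loop being talked about, side by side with the modern form (a small sketch; the function names are made up):

```cpp
#include <vector>

int sum_oldschool(const std::vector<int>& v) {
    int total = 0;
    // The loop beginners were historically shown first: explicit iterator
    // type, begin()/end(), and a dereference to get at the element.
    for (std::vector<int>::const_iterator it = v.begin(); it != v.end(); ++it)
        total += *it;
    return total;
}

int sum_modern(const std::vector<int>& v) {
    int total = 0;
    for (int x : v) // range-for: the same loop, far less to memorize
        total += x;
    return total;
}
```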
That's my main point. I agree that it helps in C++ to know the details, especially as an intermediate/expert, but you have to walk before you can run. And a lot of the modern tools help you do that. But then again, you believe that this makes it harder to work with the language, which I don't think is the case at all.
I guess my argument to that is, I started learning C++ in the late 80s/early 90s. At that time, it was just another language. People came in from C, Fortran, etc. Nobody really complained about how hard it was, and if I was asked to help debug something, it was typically a reasonably difficult bug to find. Now I see constant cries for help even for things that should be pretty simple. Modern C++ is an attempt to tack whiz-bang features onto an inherently low-level language. That works well enough if you understand it at least at the C level. But many people don't. And I can see the difference because I have tutored a lot of people, past and present.