
Error handling


Calin said:
std::vector breaks when the index is out of bounds, and it's not a breakpoint I set - how is that done?

Haha, you can read the source code of std::vector to figure out…

… you can't read the source code?

Well, the reason you can't read the code is that it is so full of debug mode extra checks, the actual function is obfuscated.
So that's how they do it. :D

You should use asserts too. They're very, very helpful.
But for out-of-bounds checks on arrays you can indeed just use std::array.
It has some other advantages too: you can copy one array to another without a loop or memcpy, you can use STL algorithms on it, etc.
Unfortunately it's a bit more typing, so personally I refuse to do the right thing and still use C arrays in most cases where I'm sure I won't make an out-of-bounds mistake.
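A minimal sketch of those points (the names and values are just for illustration):

#include <algorithm>
#include <array>
#include <cassert>
#include <cstddef>

int main() {
	std::array<int, 4> a = {3, 1, 4, 1};
	std::array<int, 4> b{};

	b = a;                          // whole-array copy, no loop or memcpy needed
	std::sort(b.begin(), b.end());  // STL algorithms work directly on std::array

	std::size_t i = 2;
	assert(i < b.size());           // explicit assert before indexing, as suggested above
	return b[i];
}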

JoeJ said:
There is no guarantee in the STL specs, but using MSVC with the default STL we get the following behavior in debug mode:

In MSVC debug builds specifically, yes - with iterator debug level ≥ 1, at least:

_NODISCARD _CONSTEXPR20 _Ty& operator[](const size_type _Pos) noexcept /* strengthened */ {
	auto& _My_data = _Mypair._Myval2;
#if _CONTAINER_DEBUG_LEVEL > 0
	_STL_VERIFY(
		_Pos < static_cast<size_type>(_My_data._Mylast - _My_data._Myfirst), "vector subscript out of range");
#endif // _CONTAINER_DEBUG_LEVEL > 0

	return _My_data._Myfirst[_Pos];
}

I've personally disabled _ITERATOR_DEBUG_LEVEL even in debug builds since it's a real performance hog, especially with map/unordered_map (which are among the few STL containers I still use). But yes, if he is using that setup, then it should work. As with all undefined behaviour, though, it's simply not guaranteed to stay that way even on MSVC, so one should at least be aware that this assertion, under those exact circumstances, may not keep firing with future updates.

A C-style array, of course, will never do anything useful on an out-of-bounds access under any circumstances, unless you offset it so far that you reach an inaccessible page and crash. So you don't lose anything by using std::array's operator[] over cArray[i] either.
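And if you want a bounds check that the standard actually guarantees regardless of debug level, std::array (and std::vector) also offer .at(), which throws std::out_of_range. A quick sketch:

#include <array>
#include <cstdio>
#include <stdexcept>

int main() {
	std::array<int, 4> values = {};

	// values[10];              // UB, just like cArray[10] - MSVC debug builds typically assert (see above)

	try {
		return values.at(10);   // required by the standard to throw on out-of-bounds
	} catch (const std::out_of_range& e) {
		std::puts(e.what());    // message text varies by implementation
		return 1;
	}
}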

JoeJ said:
Well, the reason you can't read the code is that it is so full of debug mode extra checks, the actual function is obfuscated.

Nah, that's not the reason why the MSVC STL is hard to read, at least in my book. I've seen complex 2000-LoC functions with tons of debug checks that are more readable than a single STL function. I personally attribute it to the weird and complicated camel_Snake_whateverthefuck naming, with a leading _ on every variable and their mother, as well as the overuse of macros. Isn't the following variant of the above operator[] ten times more readable?

[[nodiscard]] constexpr ValueT& operator[](const size_type pos) noexcept /* strengthened */ {
	auto& myData = myPair.myVal2;
	
	if constexpr (CONTAINER_DEBUG_LEVEL > 0)
		STL_VERIFY(pos < static_cast<size_type>(myData.myLast - myData.myFirst),
			"vector subscript out of range");

	return myData.myFirst[pos];
}

Does the same thing, uses the same variables, but way less verbose IMHO.


Juliean said:
I've personally disabled _ITERATOR_DEBUG_LEVEL even in debug builds since it's a real performance hog

um… How precisely do you do this?

I'm asking because I often tried this to help with slow debug builds, but it never helped at all. Maybe I did it wrong.

JoeJ said:
I'm asking because I often tried this to help with slow debug builds, but it never helped at all. Maybe I did it wrong.

It's rather simple, you put the following define:

_ITERATOR_DEBUG_LEVEL=0

Either in some header that is included everywhere, or inside the "Preprocessor Definitions" field of the C/C++ => Preprocessor page in the project's configuration. This can lead to mismatches with included libraries, so you need to ensure your definition matches across everything you link.
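For example, a hypothetical project-wide header (the name is made up), included - or force-included via /FI - before any standard library header, in every translation unit and library you link:

// stl_config.h - must be seen before any standard library include,
// and by everything you link, or the builds mismatch.
#pragma once

#ifndef _ITERATOR_DEBUG_LEVEL
#define _ITERATOR_DEBUG_LEVEL 0   // turn off MSVC's iterator/bounds checks even in debug builds
#endif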

Though it could also be the case that this did not affect your code specifically, because it's either not the bottleneck, or any slowness due to it was masked by the magical out-of-order CPU \o/. But you should be able to verify that it works by stepping into the code for a random STL container access and checking that it doesn't execute the check - you should be able to, despite it being hard to read :D
For me, it does work, which is sometimes a bit annoying, because I would actually want the assertions on things like [] or std::optional's operator*. I might want to check again whether level 1 is enough for those, but without the full "iterator validation, by having a map inside your map to verify all accesses are safe" shenanigans :D

While these are ways around it, you should also be looking at why you're attempting to access something out of bounds in the first place.

Iteration access patterns don't go out of range unless you're doing something like removing an item from the container while also iterating over it. Otherwise the range doesn't change, access is always in range, and there's no need for a range check.
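For example, if you do need to remove while iterating, the usual safe pattern is to continue from the iterator that erase() returns (just a sketch, not the only way to do it):

#include <vector>

// erase() invalidates the iterator passed to it, so continue from the one it returns.
void removeZeros(std::vector<int>& values) {
	for (auto it = values.begin(); it != values.end(); /* no ++it here */) {
		if (*it == 0)
			it = values.erase(it);  // returns the iterator after the erased element
		else
			++it;
	}
}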

Access patterns happening over time should generally use smart pointers to avoid holding a stale pointer. If the object still exists it can be locked as a shared pointer that won't vanish beneath you; if it no longer exists the conversion fails, so you know you can't access it. No extra range check is needed: either you have a valid shared pointer or you don't, and that's an expected condition for every access.
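A minimal sketch of that lock-or-fail pattern with std::weak_ptr (the types and names here are hypothetical):

#include <cstdio>
#include <memory>

struct Enemy { int health = 100; };

// Observer that may outlive the object it watches.
struct Targeting {
	std::weak_ptr<Enemy> target;

	void update() {
		if (auto locked = target.lock()) {          // promotes to shared_ptr if still alive
			locked->health -= 10;                   // safe: object can't vanish while locked
		} else {
			std::puts("target no longer exists");   // expected condition, not an error
		}
	}
};

int main() {
	auto enemy = std::make_shared<Enemy>();
	Targeting t{enemy};
	t.update();      // target is alive, gets hit
	enemy.reset();   // destroyed elsewhere in the program
	t.update();      // detects the stale reference instead of crashing
}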

The fact that you're having the issue at all suggests a design flaw, typically not fully understanding or defining a life cycle for the objects. There should always be a clear pattern of when objects are created, when they're valid, and when they're destroyed.
