Aggregated exceptions: Proposal summary
In a recent post, I suggested that when a destructor detects a cleanup error, throwing an exception is an acceptable way to report it. I argued that reporting problems is always preferable to ignoring them, so if a destructor does detect a problem, throwing is better than doing nothing. In the worst case, it will cause a whole program abort, which in unexpected circumstances is still better than no reporting at all.
It turns out that, with C++11, the language committee took the opposite stance on the issue. Quite simply, all destructors are now declared noexcept(true) unless you go to the trouble of declaring them noexcept(false) - in which case, you are on your own, and the STL doesn't like you.
In Visual C++ 2010 and 2013, support for noexcept is not implemented, so behavior remains as in C++03. However, if you compile the following with g++ -std=gnu++11:
#include <iostream>

struct A { ~A() { throw "all good"; } };

int main()
{
    try { A a; }
    catch (char const* z) { std::cout << z << std::endl; }
}
... the result of running this won't be "all good"; it will be a call to std::terminate(). Furthermore, even with -Wall, GCC will not warn you about this behavior. (VS 2015 does.)
Exceptions and categories of errors
When programs encounter unexpected conditions, the ways of handling them can be categorized as follows:
- It may not make sense for the program to continue. The response is whole program abort. In most cases, no cleanup is necessary; resources will be cleaned up by the operating system.
- It may not make sense for part of a program to continue, but other parts should. An example is a server that handles connections from many clients. The response is to throw an exception. Unwind the part that cannot continue, report the error, but let the rest of the program run.
- The program may have encountered an unexpected condition, but can continue. The program does not need to throw or abort; it can proceed after reporting the error.
An alternative to in-process unwinding is to isolate the parts that may fail into separate processes, so a failing part can be restarted independently. Breaking up an application into processes, however, creates difficulties for communication between them. What would previously have been a function call now requires some kind of RPC protocol, and issues arise with the security and efficiency of these communications. Unless you have infrastructure like Erlang that solves these problems for you, there is a tradeoff between simplicity and efficiency on the one hand, and resilience on the other. Most developers will choose what looks simpler and less work; that's how you end up with monolithic applications, in which exceptions are especially useful.
Limitations of C++ exceptions
The whole problem with exceptions in destructors is that C++ exceptions are inherently not designed to handle multiple concurrent errors. This isn't by necessity; it is by choice. Language designers had the following reasonable options:
- No exceptions. C chose this option.
- One exception at a time. C++ chose this option.
- Unlimited exceptions. An option that C++ can still choose.
How could C++ support unlimited exceptions?
Instead of std::exception, with its basic char const* what(), you'd have an exception class that can have child exceptions attached to it, something similar to std::nested_exception. If an exception causes you to destroy a container with 1000 objects, and 10 of those destructors throw, you can catch those exceptions, attach them to the parent exception you're handling, and allow that exception to grow as you go. By the time the exception reaches its ultimate handler, it consists of one primary exception and any number of other exceptions attached to it. That is fine, and you can report and handle them all.
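A minimal sketch of what such an aggregating exception type could look like. The class name aggregating_exception and its members are hypothetical, not part of any standard:

#include <exception>
#include <string>
#include <utility>
#include <vector>

// Hypothetical aggregating exception: carries any number of secondary
// exceptions captured while the primary one was being handled.
class aggregating_exception : public std::exception {
public:
    explicit aggregating_exception(std::string what) : m_what(std::move(what)) {}

    char const* what() const noexcept override { return m_what.c_str(); }

    // Attach an exception captured with std::current_exception()
    void attach(std::exception_ptr p) { m_children.push_back(std::move(p)); }

    std::vector<std::exception_ptr> const& children() const noexcept { return m_children; }

private:
    std::string m_what;
    std::vector<std::exception_ptr> m_children;
};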
But what if the exception is bad_alloc, and there's no memory for secondary exceptions?
The program should wait for memory to become available.
In 20 years, I have never seen bad_alloc on Windows unless the program requested an unreasonable amount of memory. In all of these cases, normal-sized allocations could still continue.
Windows will go to extreme lengths to avoid failing a reasonable memory request. I argue that this is what an operating system should do. If a program finds itself in a position where it cannot allocate a small amount of memory, it should spin with Sleep(), and wait for memory to become available. If the memory is being exhausted by another process or thread, it will eventually finish or be killed, and other processes can continue. If the memory is being exhausted by the same thread, then the program is in a borked state, and might as well hang, so someone can attach to it with a debugger. In this case, the program needs to be fixed.
We should not design exception handling as if the typical case is going to be a low-memory condition in which operator new fails for reasonable allocations. For reasonable use, operator new should never fail. If we're allocating memory for an exception, the allocator should succeed or wait indefinitely.
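A minimal sketch of that retry-until-available policy, assuming a simple backoff loop is acceptable; the function name allocate_or_wait is made up for illustration:

#include <chrono>
#include <new>
#include <thread>

// Keep retrying a small allocation instead of failing: either another
// process or thread frees memory and we proceed, or we effectively hang,
// which leaves the process available for a debugger to attach.
void* allocate_or_wait(std::size_t bytes)
{
    for (;;)
    {
        void* p = ::operator new(bytes, std::nothrow);
        if (p)
            return p;
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
    }
}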
When the human economy runs out of something vital, our go-to response is to queue for it. There's no reason a program shouldn't wait if it needs something vital such as a small amount of memory to proceed with exception handling.
Can we support exception aggregation manually?
What would have been nice is if all exceptions inherited from something like std::nested_exception, and if an exception occurred in a destructor while another is being handled, then instead of the program calling std::terminate, the new exception would simply be attached to the old one. Sadly, this functionality does not exist. There are things in C++11 that look similar at first sight, such as std::current_exception and std::throw_with_nested, but they can only be used from inside a catch handler, not from a destructor that's being called as the stack unwinds.
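For contrast, here is the case that does work today: std::throw_with_nested nests the current exception under a new one, but only from inside a catch handler; there is no equivalent hook for a destructor running during unwinding. A small sketch, with exception messages invented for illustration:

#include <exception>
#include <iostream>
#include <stdexcept>

void parse_config()
{
    try {
        throw std::runtime_error("bad value on line 3");   // low-level failure
    }
    catch (...) {
        // Legal: we are inside a catch handler, so the current exception
        // can be captured and nested under a higher-level one.
        std::throw_with_nested(std::runtime_error("failed to parse config"));
    }
}

void report(std::exception const& e)
{
    std::cerr << e.what() << '\n';
    try { std::rethrow_if_nested(e); }                      // unpack the nested exception, if any
    catch (std::exception const& inner) { report(inner); }
}

int main()
{
    try { parse_config(); }
    catch (std::exception const& e) { report(e); }
}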
Even if you implement a custom container that gracefully handles multiple exceptions in the destructors of objects it contains, this will only work if the container catches the first exception. If your container's destructor is being called during stack unwinding, you can aggregate exceptions within the container as much as you like, but there's no way to attach them to the first exception that's being handled.
In short, the answer appears to be no; this is a fundamental shortcoming of the language. You can't join an exception to another one already in flight.
What options do we currently have?
You can throw from destructors if you declare them noexcept(false) and check std::uncaught_exception() before throwing (a sketch appears after the list below). This doesn't allow for exception aggregation, but it lets you report cleanup errors when another exception isn't already in flight. If an exception is already in flight, you're on your own; the language doesn't have a solution. If an object that throws this way is used in a container, issues like resource leaks may arise. A resource leak is usually still better than not reporting an error. You can also submit to the intentions of the language designers and never throw from destructors. That leaves you with error reporting options that avoid exceptions:
- Whole program abort. You can treat cleanup failure as a critical error. The error may or may not actually be critical. For example, something like CloseHandle could fail because you've already called it, or due to some kind of network connectivity error. In the former case, the error could be critical; in the latter case, your application should be resilient.
- Report and continue. You can use some kind of non-exception facility to report the error, and then continue as though nothing happened. This can be the wrong thing to do if the cleanup error is of a kind that the program should, in fact, unwind partially.
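Here is the sketch referred to above: a destructor declared noexcept(false) that throws only when std::uncaught_exception() reports no exception in flight. The TempFile class and its cleanup function are hypothetical:

#include <exception>
#include <stdexcept>

// Hypothetical RAII type whose cleanup can fail.
class TempFile {
public:
    ~TempFile() noexcept(false)
    {
        if (!remove_file() && !std::uncaught_exception())
            throw std::runtime_error("TempFile: cleanup failed");
        // If another exception is already unwinding the stack, the cleanup
        // error is silently dropped - the limitation discussed above.
    }

private:
    bool remove_file() noexcept { return true; }   // placeholder for real cleanup
};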
What should the language do?
A future version of C++ should support unlimited concurrent exceptions. If an exception is in flight, any additional exceptions thrown in destructors should be aggregated to the main exception automatically. Consider that we now develop for systems where desktops have 16+ GiB of memory and mobile devices have 2+ GiB. We do not lack the resources to handle multiple exceptions gracefully.
For container implementors, some syntactic sugar would be most welcome:
class container_type {
    ...
    ~container_type() {
        try aggregate {   // Aggregates and re-throws exceptions in contained "try defer"
            for (size_type i=0; i!=size(); ++i)
                try defer { delete m_objects[i]; }   // Must be within "try aggregate"
            delete[] m_objects;
        }
    }
};
The above suggests two contextual keywords, "try aggregate" and "try defer", largely equivalent to:
class container_type {
    ...
    ~container_type() {
        std::exception_ptr eptr;
        for (size_type i=0; i!=size(); ++i)
            try { delete m_objects[i]; }
            catch (std::exception const&) {
                if (!eptr)
                    eptr = std::current_exception();
                else
                    eptr->aggregate_current();    // Not supported currently
            }
        delete[] m_objects;
        if (eptr)
            std::rethrow_exception(eptr);         // Join to in-flight exception:
    }                                             // not supported currently
};
This is already almost possible today; the main obstacle is joining to an exception already in flight.
How would this affect catch handlers?
It would not. The only difference would be that any exception you catch can have aggregated exceptions attached to it, which you can investigate. For example, suppose your mail delivery subsystem throws Smtp::TemporaryDeliveryFailure. During stack unwinding for this exception, a destructor calls closesocket, and it returns an error.
In current C++, you have no good options for what to do with that. Most developers will ignore it, so you lose the information that closesocket failed. If you don't want to ignore it, passing the error from a library to the application that uses it requires some ad hoc process involving C-style global error handler hooks, instead of a C++ language feature.
With the above proposal, the destructor can throw Socket::Error, and it will be aggregated to Smtp::TemporaryDeliveryFailure. Your catch handler will still catch the delivery failure, and it can ignore any aggregated exceptions, with the same result as if the destructor that calls closesocket had not thrown.
However, your catch handler can also process the aggregated exceptions. It can inspect them and decide they're innocuous, it can log them, or it can re-throw them. Standard language mechanisms can be used to relay information that is currently very hard to relay while retaining a clean architecture.
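A hypothetical catch handler under this proposal might look like the following. The aggregated() accessor, the exception types, and the helper functions are invented for illustration and do not exist in any current standard or library:

try {
    deliver_mail();                                   // may throw Smtp::TemporaryDeliveryFailure
}
catch (Smtp::TemporaryDeliveryFailure const& e) {
    schedule_retry();                                 // handle the primary failure as before
    for (std::exception_ptr const& p : e.aggregated())    // hypothetical accessor
    {
        try { std::rethrow_exception(p); }
        catch (Socket::Error const& se) { log_warning(se); }   // e.g. closesocket failed during unwinding
        catch (...)                     { /* ignore or log */ }
    }
}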
What's the impact on noexcept?
It strikes me that the main use case of noexcept is as a kludge to compensate for the language's inability to handle multiple exceptions. It basically creates two languages in one, stratifying all code into (1) code that throws and (2) code that doesn't. If we add multi-exception support, noexcept may well lose its main use case. It would remain useful for domain-specific code that wants to ensure no exceptions occur while it's executing. However, that seems like a minor niche compared to its current use as a kludge in destructors.
Comments:
Comment on Jul 17, 2015 at 20:10 by pip010
besides the throwing from destructors (which is undefined)
I think might fit your needs
Comment on Jul 17, 2015 at 21:28 by Oliora
I agree that it would be nice to have a way to report all exceptions from dtors, but in practice I never fell into a situation where I really needed this.