MS was stuck with a poor exception implementation, but what I never see explained is how they got there in the first place. It seems crazy to incur a runtime overhead to support something that is hardly ever used. The happy path is where you need the performance most, and exceptions are by definition the "unhappy path".
Michael and I talked about exception design a lot in g++ (I certainly had experience of what to do and not do from CLOS) and there was never a consideration of doing anything that might impact runtime performance.
At least now I finally understand why some people, especially in the game industry, want to disable exceptions.
As a side point, I'm disappointed by Raymond's (18-year-old) claim that "zero cost exceptions" aren't really zero cost. Everything called "zero cost" in C++ means "doesn't add any runtime cost if you don't use it". He argued against a straw man.
Table-based exception-handling metadata requires function prologue/epilogue sequences to have constrained forms that can be described by that metadata. When NT was first designed for x86-32, there was a lot of existing asm code that people wanted to easily port over to NT, and that ported code included pretty much every weird function entry/exit sequence you could think of (and lots more that you wouldn’t imagine anyone would ever think of). Switching to table-based metadata would have required modifying all that code, making porting more difficult. At least, I assume that’s the reasoning – I was deeply involved in the SEH runtime code in the ’90s and ’00s when I was on the VC++ compiler team.
The latter does exception unwinding in the kernel on runtime faults (segv, division by zero, etc). It does _not_ unwind destructors or deallocate storage. And yes, I know about this because I had to debug it, on WinCE ARM, where we eventually discovered that there were two different MS compilers, one of which generated code that could be unwound and one of which didn't.
I got bitten porting code to Windows that used setjmp/longjmp not only for exception handling (more correctly: non-local control transfer -- the exception was actually handled by then), but also for implementing a thread library.
The code worked literally everywhere else, but it turned out that on Windows longjmp doesn't just load the registers from the buffer but also does some SEH bollocks. If I recall (and this was in 2006), it didn't matter for the exception handling, but it was a hard crash for switching threads.
So I had to write my own setjmp/longjmp for Windows in assembly language (i.e. copy theirs and cut out the SEH bollocks).
Thanks. A legacy of lots of assembly code, then -- though it's surprising that stuff like that survived in libraries (in applications it probably wouldn't have mattered).
The C++ exception model is complex and the implementations are intricate. I do not think one can easily dismiss the possibility of runtime overhead just because it seems like no extra work is done. It's much more subtle than that. The implementation potentially affects inlining opportunities, compiler complexity, code caching, etc.
On modern CPUs, probably the most efficient exception model is one that simply uses error codes. Especially when paired with an optimised calling convention like what Swift does. Checking for an error flag is essentially free on a superscalar CPU anyway and all this stuff is transparent to the compiler, simplifying the translation process and enabling more optimisation opportunities.
P.S. I ran a bunch of tests with C++ a while ago and a Result-like type error handling (implemented sanely) was always as fast as C++ exceptions for the good path and faster for the bad path — unless you are going a hundred or so nested functions deep (but then you have a massive code smell problem anyway). With an optimised calling convention it is likely to be even better.
Re the zero cost claim, he specifically clarifies that zero cost exceptions have a cost even if you do not use them, which is paid by inhibiting some optimizations.
IANACW, but I think that in principle it is almost always possible to implement a truly zero-cost non-taken path, by moving all the compensation code needed to undo optimizations into the exceptional path; in practice it might be too hard for compilers to do.
If you spend a lot of time scrutinizing g++ output (which thankfully I haven't had to do in over 20 years) you'll see that the impact is negligible and that was true when Raymond made that statement. As you point out there's a lot the compiler can do to make the common path really fast.
But OK, it could potentially be nonzero. Let's say epsilon cost instead.
I concur with this. Java JITs do a ton of optimizations and all of the effects of exceptions are minimal, second-order things, like keeping something alive a little longer (for deoptimization/debugging). If the compiler has a good notion of hot/cold paths, then a lot of optimizations that you'd be tempted to think up for exceptions fall out naturally from the more general notion, including spilling the likely-never-used-but-alive-for-deopt values onto the stack in the important paths.