They just think you have lots of dead code because of silly undefined behavior nonsense.
#include <stdlib.h>

char *allocate_a_string_please(int n)
{
    if (n + 1 < n)
        return 0; // overflow
    return malloc(n + 1); // space for the NUL
}
This code seems okay at first glance: it's a simple integer overflow check that makes sense to anyone who reads it. When n equals INT_MAX, the addition will overflow, wrap around to a negative value, and the function will return NULL. Reasonable.
Unfortunately, we cannot have nice things because of optimizing compilers and the holy C standard.
The compiler "knows" that signed integer overflow is undefined. In practice, it just assumes that integer overflow cannot ever happen and uses this "fact" to "optimize" this program. Since signed integers "cannot" overflow, it "proves" that the condition always evaluates to false. This leads it to conclude that both the condition and the consequent are dead code.
Then it just deletes the safety check and introduces potential security vulnerabilities into the software.
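To make that concrete, here is roughly what the optimizer is entitled to leave behind. This is a sketch of the effective result, not literal compiler output:

#include <stdlib.h>

// With the "overflow cannot happen" assumption, the check is judged
// dead code and only the allocation survives. Nothing guards the
// n == INT_MAX case any more.
char *allocate_a_string_please(int n)
{
    return malloc(n + 1);
}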
They had to add literal compiler builtins to let people detect overflow conditions and make the compiler actually generate the code they want it to generate.
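For instance, GCC and Clang grew __builtin_add_overflow for exactly this. A sketch of the same function using it (the builtin is real; the rest is just illustration):

#include <stdlib.h>

// The builtin reports overflow through its return value instead of
// relying on wrapping, so there is no undefined behavior to exploit.
char *allocate_a_string_please(int n)
{
    int total;
    if (__builtin_add_overflow(n, 1, &total))
        return 0; // overflow detected
    return malloc(total); // space for the NUL
}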
Fighting the compiler's assumptions and axioms gets annoying at some point and people eventually discover the mercy of compiler flags such as -fwrapv and -fno-strict-aliasing. Anyone doing systems programming with strict aliasing enabled is probably doing it wrong. Can't even cast pointers without the compiler screwing things up.
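With GCC or Clang that amounts to building with something like cc -O2 -fwrapv -fno-strict-aliasing: -fwrapv defines signed overflow as two's-complement wrapping, which makes the original n + 1 < n test mean what its author intended, and -fno-strict-aliasing stops the type-based aliasing assumptions from kicking in.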
I would consider the use of signed integers for sizes to be wrong, but if you insist on using them in this example, just test for (n == INT_MAX). malloc itself uses size_t, which is unsigned.
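A sketch of both suggestions, with the function names made up here for illustration:

#include <limits.h>
#include <stdlib.h>

// Signed parameter, but the check compares against the limit instead
// of waiting for the addition to overflow.
char *allocate_signed(int n)
{
    if (n == INT_MAX)
        return 0; // n + 1 would not fit in an int
    return malloc((size_t)n + 1); // space for the NUL
}

// Unsigned parameter: size_t arithmetic wraps by definition, so the
// original style of check is well-defined and the compiler has to honor it.
char *allocate_unsigned(size_t n)
{
    if (n + 1 < n)
        return 0; // n == SIZE_MAX, the addition wrapped
    return malloc(n + 1); // space for the NUL
}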
I have been known to write patches converting signed integers to unsigned integers in places where signed arithmetic makes no sense.
The real problem is that compilers constantly screw up perfectly good code by optimizing it based on unreasonable "letter of the law" assumptions.
The issue of signed versus unsigned is tangential at best. There are good arguments for using both signed and unsigned integers. Neither type should cause compilers to screw code up beyond recognition.
The fact is signed integer overflow makes sense to programmers and works just fine on the machines people care to write code for. Nobody really cares that the C standard says signed integer overflow is undefined. That's just an excuse. If it's undefined, then simply define it.
Of course you can test for INT_MAX. The problem is that you have to somehow know you must do it that way instead of trying to observe the actual overflow. People tend to learn that sort of knowledge by being burned by optimizing compilers. I'd very much rather not have adversarial compilers.
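The knowledge in question boils down to one pattern: compare against the limit before doing the arithmetic, so the overflowing operation is never executed. A generic sketch of that idiom, with the helper name made up here:

#include <limits.h>

// Pre-check style addition: test against INT_MAX first, so the
// addition below can never overflow. Assumes b > 0 for brevity.
int checked_add(int a, int b, int *out)
{
    if (a > INT_MAX - b)
        return -1; // a + b would overflow
    *out = a + b;
    return 0;
}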