Hacker News

I'm arguing that we should use asserts or similar defensive coding mechanisms to ensure that preconditions are met instead of merrily continuing to run our program with cascading error effects.


I've had this argument a ton with people used to new high-level languages that make null safety "too easy", like:

  next?.doThing()
Where doThing is never called if next is null.

Languages that do this use "scary" operators to crash on null:

  next!!.doThing()
And it's drilled into people's heads that the latter is a Bad Thing (tm)
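C has neither operator, but (with hypothetical names, as a sketch) the two spellings boil down to roughly this:

```c
#include <stdio.h>
#include <stdlib.h>

struct node { int value; };

void do_thing(struct node *n) { printf("value = %d\n", n->value); }

/* next?.doThing(): silently do nothing on null.
   Returns 1 if the call happened, 0 if it was skipped. */
int call_safe(struct node *next) {
    if (next == NULL)
        return 0;
    do_thing(next);
    return 1;
}

/* next!!.doThing(): fail fast on null, like a NullPointerException. */
int call_or_crash(struct node *next) {
    if (next == NULL)
        abort();
    do_thing(next);
    return 1;
}
```

The whole argument is about which of those two bodies you want in a given context.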

-

You really need to consider context in null handling.

Imagine an app that alerts a nurse when the patient's heartbeat is out of range.

In an application where the UI context might have been closed out, it's common to see

  someUiContext?.showAlert()
But what actually happens if the context is gone?

It's better to crash, and to have part of your startup procedure be communicating that a crash occurred and that the doctor should check whether something went wrong, than to continue silently.
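One way to make that crash useful rather than just abrupt: a sketch (the marker path and function names are made up) of recording the failure before aborting, so the next startup can surface it:

```c
#include <stdio.h>
#include <stdlib.h>

#define CRASH_MARKER "/tmp/app.crashed"   /* hypothetical path */

/* Instead of silently dropping the alert: record that we died, then crash,
   so the next startup can tell someone that something went wrong. */
void crash_with_marker(const char *why) {
    FILE *f = fopen(CRASH_MARKER, "w");
    if (f) {
        fputs(why, f);
        fclose(f);
    }
    abort();
}

/* Part of the startup procedure: did the previous run crash? */
int previous_run_crashed(void) {
    FILE *f = fopen(CRASH_MARKER, "r");
    if (f == NULL)
        return 0;
    fclose(f);
    remove(CRASH_MARKER);    /* consume the marker */
    return 1;
}
```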

-

The problem is that when you tell people this, the kneejerk reaction is always "are you saying we should intentionally add crashes?!"

Because safe null handling was specifically added to avoid exactly that kind of crash...

(For example, if the app was a news reader and the alert was "Article failed to load", you wouldn't want the app to crash just because the user left a certain page before the alert was shown)

But I think the pendulum has swung too far at this point: people are so used to just sweeping nulls under the rug, and it's not great for finding issues.


Yes - and that is the point the OP you responded to is making. This is not a generic library function. It's used in a specific setting where preconditions exist and presumably are checked _prior_ to calling this code.

Sure, you could make the argument that we don't _know_ they are being checked, but that's a pointless discussion. Who cares? _If_ the preconditions are met, this code is safe; if they aren't, it isn't. Since we don't know one way or the other, there's no point in discussing further. The kernel developers know their stuff...


Imo every piece of code (for reasonable definitions of "piece") is supposed to check its own preconditions and not rely on the caller to check them.


It's a bad idea, especially in kernel/low-level/performance-critical code. It's OK to check some values at specific points (like when a value is passed from user space), but in general it's bad practice to check them in every single function. You trust your program flow.

Imagine if at every function down a complex stack you go with:

    if (!ptr1 || !*ptr1 || *ptr1 > 5 || param1 < 0 || param2)
        /* etc.... */
    {
        return NULL;
    }
(used arbitrary names and values).


This is nice in theory but a bad idea in practice (as a blanket statement, I am all for checking preconditions in general). An easy example is binary search, which I think is a reasonable "piece" of code by your definition. One should never check its preconditions _inside_ the binary search function (that the list of elements being searched is partitioned by the search predicate). Checking the precondition is O(n) while the algorithm is O(lg n).
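To make the asymmetry concrete, here is an illustrative sketch (not the code under discussion): the search itself touches only O(lg n) elements, while verifying its precondition touches all n.

```c
#include <stddef.h>

/* O(lg n): the search itself looks at only a handful of elements. */
long binary_search(const int *a, size_t n, int key) {
    size_t lo = 0, hi = n;
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        if (a[mid] < key)
            lo = mid + 1;
        else if (a[mid] > key)
            hi = mid;
        else
            return (long)mid;
    }
    return -1;   /* not found */
}

/* O(n): verifying the precondition costs more than the search it guards. */
int is_sorted(const int *a, size_t n) {
    for (size_t i = 1; i < n; i++)
        if (a[i - 1] > a[i])
            return 0;
    return 1;
}
```

Calling is_sorted at the top of binary_search would turn every lookup into a linear scan, defeating the point of the algorithm.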


You're right, but checking for null is not terribly expensive compared to iterating over a linked list.


The thing is that you aren't really advocating for just that... Since this is pretty general support library code, the result of this philosophy is defensive checks in all support functions, and that means checks in every spinlock, every mutex/semaphore, and god knows how many other places... everything, really. We would also want these checks at multiple levels, since we shouldn't trust that other people checked things. This would have a significant effect on performance, and many would prefer their system not to run however-many percent slower. All to avoid a possible issue that can't be very large, or systems would be kernel panic'ing all over the place.

From a black-box, zero-knowledge perspective, it's maybe worth remembering that code built this way successfully runs mission-critical systems all over the world every day. Thoughtful people would have switched to other (slower, more pedantic) systems if doing things differently were a real overall life improvement. People have had the choice; there are plenty of OSes out there... some far more formal/pedantic. Linux wasn't the safe choice back in the day, it was just better by several important metrics, consistently over time.

Perhaps these insanely experienced kernel devs know what they are doing.


This isn't possible in many cases - consider the simple C library function strlen, one of whose preconditions is that it must only be called on a zero-terminated string. There is no way to write code in strlen to check that this is true.


Which is one of the reasons why coding standards like MISRA forbid using strlen (at least in C++; I guess even MISRA can't force safer C on you when there are no sane string types to reach for).
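A sketch of why the check is impossible (my_strlen/my_strnlen are illustrative reimplementations, not the libc ones): strlen just walks memory until it happens to hit a 0 byte, and the only checkable variant is one where the caller supplies a bound, which is what POSIX strnlen does.

```c
#include <stddef.h>

/* strlen walks memory until it happens to hit a 0 byte. There is nothing
   here that could detect a missing terminator; without one, this loop is
   undefined behavior. */
size_t my_strlen(const char *s) {
    const char *p = s;
    while (*p != '\0')
        p++;
    return (size_t)(p - s);
}

/* The closest checkable variant: the caller supplies a bound, turning
   "must be zero-terminated" into "terminated within maxlen bytes". */
size_t my_strnlen(const char *s, size_t maxlen) {
    size_t i = 0;
    while (i < maxlen && s[i] != '\0')
        i++;
    return i;
}
```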


Asserts have a run-time cost. What you need is to ensure those preconditions are met in the first place, statically, by formal methods for example.


Which is ideal, but generally beyond reach for now. So assertions it is.
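In C there is also a middle ground worth noting: plain assert costs nothing in release builds (it compiles away under -DNDEBUG), and C11's _Static_assert moves what it can to compile time. A small sketch with a hypothetical helper:

```c
#include <assert.h>

/* Checked at compile time: zero run-time cost, ever. */
_Static_assert(sizeof(int) >= 4, "code assumes at least 32-bit int");

/* Hypothetical helper: the assert costs something only in debug builds;
   compiling with -DNDEBUG removes the check entirely. */
int clamp_index(int i, int n) {
    assert(n > 0 && "clamp_index requires a non-empty range");
    if (i < 0)
        return 0;
    if (i >= n)
        return n - 1;
    return i;
}
```

So the run-time cost of asserts is a per-build decision, not a permanent one.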



