Hacker News

Several people I know with a formal methods and distributed systems background aren't that impressed with TLA+. I'm not exactly sure why or what else they prefer (Isabelle? Coq?). Anyone with a formal methods background care to comment?


The vast majority of people using languages like Coq, Isabelle, or Lean are researchers, and those tools are designed for research (such as defining and exploring new logical systems). TLA+, on the other hand, is designed for practitioners, i.e. people who build systems for a living. That is why more papers are being written about Coq and Isabelle, but more bugs in more real-world systems are being found with TLA+. So it depends on what your job is.


Yes, the people I am referring to are in the business of designing and proving new distributed algorithms, and of proving or disproving the correctness of existing well-known algorithms for which only handwavy proofs have been provided. In particular, their work goes beyond just model checking.


Right. TLA+ has a proof assistant, just as Coq and Isabelle do, and it has been used to good effect. But because there is also a model-checker capable of checking (a subset of) TLA+ (actually, two model checkers now), practitioners greatly prefer using that over a proof assistant. The reason is that if your goal isn't to publish a paper but to deploy a system, what you're optimising for is bugs found per hour of effort, and a model-checker has a higher ROI in that regard than deductive proofs.


People who can write proofs in Coq and Isabelle might prefer that over the TLC approach of exhaustively checking all the possible states allowed by a TLA+ spec. But writing proofs in Coq/Isabelle/Lean/HOL is a more sophisticated skill and requires additional training in the various theories available in these provers. The brute-force approach of TLA+/TLC is more dependable as part of an engineering process. Some specs can be really hard to prove, but easy to exhaustively check.
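The brute-force idea is easy to see in a minimal sketch (hypothetical Python of my own, not how TLC is actually implemented): enumerate every state reachable from the initial states via a "next" relation, and check an invariant in each one.

```python
# Minimal sketch of explicit-state invariant checking: breadth-first
# search over the finite state space, reporting the first violation.
from collections import deque

def check_invariant(init_states, next_states, invariant):
    """Return a state violating the invariant, or None if it holds
    in every reachable state."""
    seen = set(init_states)
    queue = deque(init_states)
    while queue:
        s = queue.popleft()
        if not invariant(s):
            return s          # counterexample found
        for t in next_states(s):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return None

# Toy model: a counter that increments mod 5 or resets to 0, with the
# (wrong) claimed invariant "counter stays below 3".
bad = check_invariant([0], lambda s: [(s + 1) % 5, 0], lambda s: s < 3)
print(bad)  # → 3, the first state that breaks the invariant
```

Everything after writing the model and the invariant is push-button, which is exactly the trade-off being described: no proof effort, but the state space must stay finite and small.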


I'd be interested in hearing their opinions! Do you know more about their contexts? Like if they're in academia, industry, what kinds of things they're doing, etc.

(I don't think it's necessarily the case that they prefer Isabelle or Coq; it really depends on what they're trying to do. I'd be especially fascinated if they prefer, say, SPIN to TLA+, which is a much closer tool.)


It's not like TLA+ is really pushing research frontiers; it's just a great language, built on solid, established algorithms, that works really well for modeling distributed and concurrent systems. Maybe they're unhappy with the proof-system part of the language? That's still a research project in progress.


TLA+ is used for model checking, right? Do Isabelle and Coq do model checking, or are they optimized for theorem proving? I haven't tracked the area for years. I thought the popular model-checking systems were things like SPIN and NuSMV. I didn't know the industry had shifted to using a proof system like Coq.


FWIW, I have a PhD in formal methods and spent a good chunk of that PhD proving stuff about distributed systems in Isabelle. I'm vaguely familiar with Coq. At work I've been largely using TLA+ the last few months to analyze designs of distributed protocols.

What I'll say is a simplification, but I'd describe TLA+ as a one-trick pony, that does that one trick very well. Roughly, you have to structure the model of your algorithm as a transition system, described using a predicate on "these are the legal initial states", and another predicate over two states that says how you can move from one state to the next. To describe these predicates, you get a fairly simple but expressive language at your disposal (a TLA cheat sheet probably fits a single page). This format lends itself well to describing high-level ideas of distributed protocols or concurrent algorithms (roughly what you'd write as pseudocode for those protocols/algorithms). You can then use that same predicate language, plus some additional operators to talk about time (e.g., you can say things like "never" or "eventually"), to specify what your system should do. Finally, you get an automated analysis tool (actually two tools these days, TLC and Apalache), that checks whether your model satisfies the specification. The beauty is that the checking is push-button - after you've finished writing the model and the spec, your work is largely complete. The analysis will have severe limitations; your model can't be too big, and it has to be finite (e.g., if you're writing a model of a distributed protocol, you have to limit the analysis to settings with a handful of nodes), but in practice even this limited analysis weeds out most of the bugs.
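For concreteness, a toy spec in that Init/Next shape might look like this (a made-up counter example of mine, not anything from a real protocol):

```tla
---- MODULE Counter ----
EXTENDS Naturals
VARIABLE x

Init == x = 0                   \* the legal initial states
Next == x < 3 /\ x' = x + 1     \* how to move from one state to the next
Spec == Init /\ [][Next]_x /\ WF_x(Next)

TypeOK  == x \in 0..3           \* an invariant for TLC to check
Reaches == <>(x = 3)            \* "eventually x hits 3"
====
```

Here `[]` and `<>` are the temporal operators ("always" and "eventually") mentioned above; TLC exhaustively checks that every reachable state satisfies `TypeOK` and that every behavior satisfies `Reaches`.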

Isabelle and Coq are theorem provers (there's also a bunch of other theorem provers). That means, you have to define whatever you're modeling as a mathematical object, and then you go off and prove stuff about it. They're both extremely flexible - you can model a transition system just like TLA does, but you can also talk about any kind of mathematics (algebra, statistics, financial math, whatever). You can also encode programming language semantics, as well as program logics, that allow you both model actual C or OCaml or whatever code, and to specify properties about programs (as mathematical theorems) and prove them in a more comfortable way using program logics. The analysis (which corresponds to proving a theorem) is generally not limited (e.g., you prove a distributed protocol correct for any number of nodes).
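To illustrate the "not limited" point, here is a toy sketch in Lean (my own example, assuming a recent Lean 4 with the `omega` tactic): a one-variable transition system encoded as a reachability predicate, with an invariant proved for all reachable states rather than checked up to a finite bound.

```lean
-- A tiny transition system: start at 0, each step adds 2.
inductive Reach : Nat → Prop where
  | init : Reach 0
  | step : ∀ {n}, Reach n → Reach (n + 2)

-- The invariant "every reachable state is even", proved by induction
-- over the reachability derivation -- for all states, no finite bound.
theorem reach_even : ∀ n, Reach n → n % 2 = 0 := by
  intro n h
  induction h with
  | init => rfl
  | step _ ih => omega
```

The price, as noted below, is that you write the inductive argument yourself instead of pressing a button.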

The snag is that in Isabelle or Coq the proof is done in a largely manual way (you have to sort of type in the argument by hand, often in excruciating detail). If you want to verify a program with N lines of code, you'll typically end up writing 5-20 times N lines of proof to convince the theorem prover of the correctness of said program. But to do this, you'll generally have a large library of already-proved theorems (6-7 years ago I counted around 200k theorems available for Isabelle), and you will generally have a "metaprogramming language" at your disposal to write your own analysis (i.e., proof) procedures.

For academics, especially programming-languages and formal-methods people, the work is often writing new program logics, or new proof procedures, or developing whole new theories. Theorem provers lend themselves better to that kind of work, as well as to pushing boundaries such as code-level analysis. But for engineering, TLA+ can be the 80/20 solution in many cases.


Great answer, thanks. Yes, this makes a lot of sense. My understanding was that TLA+ also supported theorem proving, but it sounds like it is used more as a model checker in practice.


TLA+ has very different aims.

A better criticism would probably be to note that even within the more similar world of (say) model checking, TLA+ is a Lisp to (say) Spin's C.



