
> Let's take a CAD program. Which aspects of it would you formally verify?

Any large program will contain some smaller components with relatively well-defined behavior. CAD is not my specialty, so I can't really comment on what algorithms are used in that domain. Forgetting about fancy algorithms for a moment, just having a more expressive type system will allow you to express invariants in your code like the fact that array indices are within the relevant bounds, that you never try to pop an empty stack, etc.—everyday programming issues.
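To make that concrete, here is a minimal sketch in Lean 4 (the function names are my own, purely illustrative) of both invariants mentioned above: an index type that rules out out-of-bounds access, and a pop that demands a proof of non-emptiness:

```lean
-- `Fin xs.size` is the type of naturals provably less than `xs.size`,
-- so this lookup can never go out of bounds.
def safeGet (xs : Array Nat) (i : Fin xs.size) : Nat :=
  xs[i]

-- Popping requires a proof that the stack is non-empty; the `[]`
-- case is dismissed using that very proof.
def pop {α : Type} : (s : List α) → s ≠ [] → α × List α
  | x :: rest, _ => (x, rest)
  | [],        h => absurd rfl h
```

Neither signature is expressible in a mainstream type system, yet both are everyday invariants.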

For a more concrete example, lately I've been using Coq to formally verify critical properties about a certain type of graph-like data structure I'm using in a system I'm building.

> If you are going for the easy parts, those can already be dealt with nicely with static typing and testing, essentially push-button automated verification.

Most engineers are already writing tests and using static types. Yet, we still have buggy programs.

And just to be clear, the kind of formal verification we're talking about is based on static typing. It's just a more expressive type system than what most programmers are used to.

> If you are going for the interesting parts, you will be doing math, essentially.

You are doing some form of math, but not the kind of cutting edge math that mathematicians do—which was my original point. You are not going to run into the kinds of tricky problems that mathematicians run into with theorem proving software, like universes being too small etc. Most data in software engineering is finite and reasoning about it involves little more than arithmetic and induction (which is just out of reach for mainstream type systems, but not for the kind of type systems used in proof assistants).
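As a small example of the kind of "arithmetic and induction" involved, here is a sketch in Lean 4 (the `sum` definition and theorem name are illustrative) of a fact about finite data proved by structural induction:

```lean
-- A simple recursive function over finite data.
def sum : List Nat → Nat
  | []      => 0
  | x :: xs => x + sum xs

-- Reasoning about it needs only arithmetic plus structural induction.
theorem sum_append (xs ys : List Nat) :
    sum (xs ++ ys) = sum xs + sum ys := by
  induction xs with
  | nil => simp [sum]
  | cons x xs ih => simp [sum, ih, Nat.add_assoc]
```

Nothing here is cutting-edge mathematics, but no mainstream type system can even state the theorem.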



First, theorem proving is NOT the same as an advanced form of static typing. This is a misunderstanding mostly pushed by computer scientists. Instead of propositions as types, I advocate a more practical form of types, based on Abstraction Logic [0, 1].

Second, yes of course, you can carve out components and concentrate on those. If you can find opportunities for this, great! You will still have buggy programs in which you use those components, to borrow your own argument.

Third, data may be finite, but reasoning about it is often done better in an infinitary context. After all, x^2 + x - 3 is also a finite expression, and much easier to understand than most software. So what? You will find a lot of interesting mathematics done with polynomials, some of it cutting-edge. Saying your software doesn't need cutting-edge math is just limiting yourself and your software. Chances are you will be doing some new (=cutting-edge) math if you try to verify new things. And yes, I run into problems with universes all the time actually, because this is relevant for modular formalisations. It's best to just have a single mathematical universe!

[0] https://obua.com/publications/philosophy-of-abstraction-logi...

[1] https://obua.com/publications/practical-types/1/


> First, theorem proving is NOT the same as an advanced form of static typing.

This Hacker News post is about a theorem prover based on dependent types. That's the context for our discussion.

> You will still have buggy programs in which you use those components

No one is disagreeing with this claim. But eliminating some bugs is better than nothing, even if you don't eliminate all bugs. You and the other commenters repeating this strawman are doing a lot of harm to people trying to socialize their research.

> Chances are you will be doing some new (=cutting-edge) math if you try to verify new things.

Citation needed. Most software is not doing anything interesting at all from a mathematical perspective, just shuffling data around. But either way the point is moot—Martin-Löf type theory (which is what this "magmide" thing seems to be based on) can do arbitrarily fancy math if needed (which is rarely).

I've been verifying bits of software for about 10 years, and I've never needed to invent new math to do it (though I would be happy if I ever did!).


Well, if you created a new data structure not known before, and proved theorems about it, that's new math. If you copied a well-known data structure and proved theorems about it, that's not new math.

What do you think mathematicians do? They just examine certain things rigorously and with utmost scrutiny. These things are simpler than things appearing in real life. Software interfaces with real life, so cutting-edge math is really a subset of what's needed for software. That is obvious to me, but I don't have a citation. You can cite me, if you want to.

Finally, dependent types as done today in Coq and Lean etc. are not a nice enough logic to attract mathematicians. The reason for that is that it is not a nice enough logic, full stop. So why would it be nice enough for computer scientists? Oh, because your problems are simpler, so you don't need a nice logic?

Saying that Martin-Löf type theory can do arbitrarily fancy math is both false and right. Just as saying that anything can be programmed in assembler is both false and right. Yeah, with enough focus and discipline you probably can, but who would want to?


> The reason for that is that it is not nice enough a logic, full stop. So why would it be nice enough for computer scientists?

Type theory has many attractive properties over traditional foundations like set theory. See, for example: https://golem.ph.utexas.edu/category/2013/01/from_set_theory...


Citing your link:

> At this point, however, you may be feeling that type theory sounds very complicated. Lots of different ways to form types, each with their own rules for forming elements? Where is the simplicity and intuitiveness that we expect of a foundational theory?

It's just not there, Mike. Type theory is neither simple nor intuitive, and it doesn't make a good foundation.

Type theory certainly has advantages, but they get lost in esoteric texts like Mike's. There are two simple advantages over set theory within first-order logic: general variable binding, and keeping things separate, for example that a boolean is not a number. I show how to do general variable binding without static types in [0], and how to still keep things separate in [1]. All without having to divide the mathematical universe into static types.


Professional mathematicians are today using dependent types in Lean to prove leading-edge mathematics. Google Lean and mathlib.


I am aware of Lean and mathlib. It's the group around Kevin Buzzard that is doing mathlib, and it is for sure a great effort. Being practical, they latched onto the theorem-proving facilities that are currently available, Lean certainly being one of the best at this moment. But just because they can make type theory work for them, with much effort, doesn't mean that it is the right way to do formal math. They are still a drop in the ocean compared to all the mathematicians who have never touched a proof assistant. I find it telling that in the Liquid Tensor Experiment Peter Scholze didn't touch Lean himself, but worked through this group as intermediaries, who mediated the type theory details for him.

The state of proof assistants is such that currently almost all of them are based on some form of type theory. There are exceptions like Mizar and Metamath (Zero), which are based on first-order logic. Type theory seemed to be the only way to have proper general variable binding, via lambdas, and that is the main reason for its popularity today. Type theory is a particular mathematical theory, though, and not particularly well suited to most things mathematicians like to do, like forming completions or subsets of types/sets. Type theory also cannot deal with undefinedness properly. Of course, there are methods of working around these issues like coercions and option types, but they are cumbersome.
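The option-type workaround mentioned above looks roughly like this in Lean 4 (a sketch; `headOr` is a made-up name): the partial operation is totalized by returning an `Option`, and every caller must then unwrap it:

```lean
-- `head` is undefined on `[]`, so in type theory it is typically
-- totalized by returning an `Option`:
def head? {α : Type} : List α → Option α
  | []     => none
  | x :: _ => some x

-- Every use site must now handle `none`, even when the list is
-- known to be non-empty; that is the cumbersomeness being discussed.
def headOr {α : Type} (xs : List α) (default : α) : α :=
  match head? xs with
  | none   => default
  | some x => x
```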

Until recently I also thought that a minimum of type theory is necessary, to get general variable binding. For example, I tried to embed set theory within simply-typed higher-order logic ([2]).

But since last year I know that you don't need type theory for variable binding (see [0])! Of course, (dependent) types are still useful, but now you can work with them in a much more flexible way, without having to divide the mathematical universe into separate STATIC and A PRIORI chunks labelled by types (see [1], but that needs to be updated with the understanding gained from [0]).

If type theory works for you, fine. But I know there is a better way.

[2] https://link.springer.com/chapter/10.1007/978-3-319-20615-8_...


You might also want to check out nuPRL. It uses a completely different approach (computational types). It might be a better match for what you are talking about.


I've heard of nuPRL; it seems to also be based on type theory, with special emphasis on constructivity. It's basically the same as Coq, foundationally. At a summer school, Bob Constable once said that he would refuse to fly in a plane whose software had been formally verified using classical logic instead of constructive logic... Well, I wouldn't mind an intuitionistic verification, but I'd definitely take even "just" a classical one.


No, it is actually quite different. nuPRL starts with an untyped programming language, and you then prove that an untyped expression has a certain behaviour. The behaviour is called a type, but it is fundamentally a very different idea from Martin-Löf type theory (IMHO). They do say that it is MLTT, and in principle they are right, but MLTT is as powerful as set theory, so that could be said of almost anything in mathematics. Lean, for example, supports non-constructive mathematics, but it is still based on a type theory. Anyway, YouTube has a great talk about the ideas: https://www.youtube.com/watch?v=LE0SSLizYUI


Thanks for a great reply. I would like to check out your references; however, you mention more than one, but I can only see [2]?


No worries, happy to talk about this stuff all day long! The links are in a post higher up; they are:

[0] https://obua.com/publications/philosophy-of-abstraction-logi...

[1] https://obua.com/publications/practical-types/1/


Same with me. It is really interesting stuff!


It literally is. Martin-Löf type theory is the mathematical foundation used by most proof assistants today. The type checker is the prover: if your code type checks, then you have proven a theorem. I highly recommend learning more about the Curry-Howard correspondence and dependent types. It might just blow your mind.
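As a tiny Lean 4 illustration of Curry-Howard (the theorem name is arbitrary): the proposition is a type, the proof is a program inhabiting that type, and type checking the program is checking the proof:

```lean
-- The proposition `p ∧ q → q ∧ p` is a type; this function is a
-- proof of it. If it type checks, the theorem holds.
theorem and_swap (p q : Prop) : p ∧ q → q ∧ p :=
  fun ⟨hp, hq⟩ => ⟨hq, hp⟩
```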


See my other answers. Curry-Howard is interesting, but making it the foundation of theorem proving is a choice (in my opinion, not a very good one), not a necessity, as most Curry-Howard fans seem to think.


> First, theorem proving is NOT the same as an advanced form of static typing.

Yes, it very much is in a dependent type theory.


Yes, and because of that limitation dependent type theory is an inferior logic for theorem proving.


That's silly. Whether it's a good or bad metatheory for theorem proving depends entirely on one's goals and preferences. Most mathematicians use a type theory over something else, so it's hard to even take your view seriously.

On top of that, you seem very opinionated for someone who was just confused about the topic of this thread. I have no idea why you're waging a holy war over this but maybe take a break.


Most mathematicians don't even use formal logic. For sure they don't use type theory! You seem to be the one who is silly/confused here. If you want to lift your confusion, read the [0] link I gave above.



