> Software can still be provably correct and have security holes resulting from an insecure definition of "correct." Formal verification does not solve security.
Nirvana fallacy. The point is that it can be much better and eliminate ALL non-design bugs.
> Please explain how a company could profit more if formal verification does not bring more revenue than it costs? You seem to be assuming that revenue will appear that is greater than the costs. Where is this revenue coming from, exactly?
The "revenue" could come from savings: not fixing bugs, not paying for ransomed assets, not eating all the other costs that bugs cause. You're just assuming that doesn't add up and that there's no other reason we don't do formal verification. That's just stupid. Show me the studies. Your claim is just as strong as the claim you think I'm making, but you missed my point entirely.
> Nirvana fallacy. The point is that it can be much better and eliminate ALL non-design bugs.
Serious question: do you really believe that?
> That's just stupid. Show me the studies.
There is a kind of HAL-9000 quality to many of these arguments. Formal verification is perfect by definition. The fact that it hasn't had very much impact in the real world is all the more evidence of the world being full of wicked people.
I mean it's a fact, so yes. You can prove programs are correct. The only possible flaw they can have is that the specification is wrong.
> The fact that it hasn't had very much impact in the real world is all the more evidence of the world being full of wicked people.
That's very much not what I said. There may be many reasons. Assuming some conclusion without actual research is braindead. Cost-benefit analysis is not the only reason things do/don't happen in businesses and we have no idea whether that's the reason here. It's an empirical question that requires actual research, not a priori jacking off.
> You can prove programs are correct. The only possible flaw they can have is the specification is wrong.
So specifications are kind of like programs?
Have you heard of logical positivism?
> That's very much not what I said. There may be many reasons.
My point was that it always seems to be some external factor. That strikes me as being very convenient.
> Assuming some conclusion without actual research is braindead.
I didn't think I assumed anything. Like anybody else, I have many things that I need to assess in my day to day life, and often deal with considerable uncertainty.
Well, seL4 has verifications of its mixed-criticality hard-real-time guarantees (scheduling-latency bounds tight enough to be useful) and its data-diode functionality, and its isolation properties have been verified not just at a fine-grained specification level but at the level of high-level, human-readable invariants. The verification doesn't cover timing side channels (and perhaps some similar ones), but it's still extremely useful.
Formal verification shines in two situations: complicated optimized algorithms with a naive reference implementation you want to confirm equivalent, and high-level behavioral invariants of complex systems (like seL4's capability system, or a cluster database's consistency during literally all possible failover scenarios).
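The first pattern can be sketched in a proof assistant. This is a hypothetical Lean 4 toy, not any real project's proof: `insertionSort` plays the trusted naive reference, `fastSort` stands in for an optimized implementation (here it is just the reference again, so the equivalence proof is trivially `rfl`; a real optimized implementation would need an actual proof):

```lean
-- Hypothetical sketch of the "prove equivalence to a naive reference"
-- pattern. Names (insert', insertionSort, fastSort) are made up here.

def insert' (x : Nat) : List Nat → List Nat
  | [] => [x]
  | y :: ys => if x ≤ y then x :: y :: ys else y :: insert' x ys

-- The naive reference implementation we trust by inspection.
def insertionSort : List Nat → List Nat
  | [] => []
  | x :: xs => insert' x (insertionSort xs)

-- Stand-in for an optimized implementation; defined as the reference
-- so the equivalence theorem below holds by definitional unfolding.
def fastSort (xs : List Nat) : List Nat := insertionSort xs

-- The proof obligation: the two implementations agree on all inputs.
theorem fastSort_eq_reference (xs : List Nat) :
    fastSort xs = insertionSort xs := rfl
```

The point of the pattern is that the reference can be slow and obviously correct, while the implementation you ship can be fast and obviously nothing.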
> with a naive reference implementation you want to confirm equivalent
I'm guessing you know this, but in type theories like Agda you can just specify that the input and output of an algorithm have the desired properties, rather than needing to specify any reference algorithm. For example, you can state that an implementation takes a list of X and outputs a sorted list of X. In such a system nothing more is necessary: no reference code, just the single type.
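A rough Lean 4 analogue of that Agda idea (hypothetical names, a sketch rather than anyone's actual spec): `Sorted` is a predicate on lists, and the *type* of a sorting function demands the output come packaged with a proof that it is sorted.

```lean
-- "Specification as a type": Sorted is an inductive predicate, and
-- sortSpec is the type any compliant sort function must inhabit.

inductive Sorted : List Nat → Prop
  | nil : Sorted []
  | single (x : Nat) : Sorted [x]
  | cons (x y : Nat) (ys : List Nat) :
      x ≤ y → Sorted (y :: ys) → Sorted (x :: y :: ys)

-- The subtype { ys // Sorted ys } bundles a list with a proof that
-- it is sorted; any inhabitant of sortSpec sorts "by construction".
def sortSpec : Type := (xs : List Nat) → { ys : List Nat // Sorted ys }
```

No reference implementation appears anywhere; the type alone is the specification.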
Well, yes, that falls under the second case: behavioral invariants of complex systems.
And the reference for "sorting" could likely be a deterministic bogosort and certainly a primitive bubblesort.
Even if you're just looking at sorting stability, you're past what your simple "sorted" type would cover.
Most things are far less trivial than "sorted list", including (almost?) all interesting practical applications of formal verification in the life of a "normal" software engineer.
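The "sorted alone is not enough" point above can be made concrete in the same style (again a hypothetical Lean 4 sketch): a function that always returns `[]` inhabits the sorted-only spec, so a full functional-correctness spec also demands the output be a permutation of the input, here stated via element counts (stability would be yet another clause on top).

```lean
-- A boolean-style sortedness predicate, defined by structural recursion.
def IsSorted : List Nat → Prop
  | [] => True
  | [_] => True
  | x :: y :: ys => x ≤ y ∧ IsSorted (y :: ys)

-- Full functional correctness: the output is sorted AND has the same
-- element counts as the input (i.e. it is a permutation of the input).
-- Note: fun _ => [] satisfies the first clause but not the second.
def CorrectSort (sort : List Nat → List Nat) : Prop :=
  ∀ xs : List Nat, IsSorted (sort xs) ∧
    ∀ x : Nat, (sort xs).count x = xs.count x
```

Even this says nothing about stability or complexity, which is part of why nontrivial specs are where the real work lives.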
It's a tradeoff though - you could spend the time looking for bugs between the design and implementation, or you could get the implementation out sooner and get feedback and iterations on the design