
Whether the Cryptocat programmers suck or not, I've seen worse errors from better programmers. I'm not sure what metric you'd use to correlate the two.


You are dangerously fooling yourself by minimizing the importance of those bugs. Cryptography software is not like regular software. It is critical software, like the kind used to run planes or nuclear power plants: people's lives depend on it.

People with no programming experience should be literally banned by law from writing critical software. You should take those bugs way more seriously.

PS: I have seen the fear on a programmer's face when asked to write crypto software. They know enough to be shit scared. You need that kind of person.


I help write microcontroller code for pressure equipment management. If something goes sufficiently wrong in heating/fails to properly vent, an explosion can occur, endangering everyone in the area.

Even unrelated code is heavily audited to make sure that it can't somehow impact the main control loop and cause an invalid state.

Cryptography software should be much the same.


With normal software you can load up on unit and integration tests to make yourself more confident in your software. When the concern is the integrity of a cryptographic system, things are not quite so simple. You can write tests, sure, but the confidence they buy you afterwards is going to be much lower.
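
For instance, a known-answer test (a rough Python sketch; the vector is the published FIPS 180-2 value for SHA-256("abc")) will happily pass while saying nothing about side channels, randomness quality, or protocol-level flaws:

    import hashlib
    import unittest

    class KnownAnswerTest(unittest.TestCase):
        def test_sha256_abc(self):
            # Published FIPS 180-2 test vector for SHA-256("abc").
            self.assertEqual(
                hashlib.sha256(b"abc").hexdigest(),
                "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad")

    if __name__ == "__main__":
        unittest.main()

Passing that tells you the function computes the right answer on one input; it tells you nothing about constant-time behaviour or how the primitive is composed, which is where the confidence gap comes from.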


We go beyond unit tests to verify that the algorithms can't create certain states by any execution path, etc.

Formal verification of software properties is an interesting field.
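
As a toy version of the idea, done by brute-force enumeration in Python (the heater/vent model is invented purely for illustration; real tools such as SPARK or a model checker establish the same property symbolically):

    from collections import deque

    MAX_PRESSURE = 5  # the invalid state we must never reach is pressure > MAX_PRESSURE

    def successors(state):
        # One controller step: heating raises pressure, an open vent lowers it,
        # and the controller opens the vent one step before the cutoff.
        pressure, vent_open = state
        next_pressure = max(pressure - 1, 0) if vent_open else pressure + 1
        yield (next_pressure, vent_open or pressure + 1 >= MAX_PRESSURE - 1)

    def check(initial):
        # Breadth-first search over every reachable state; assert none is unsafe.
        seen, queue = {initial}, deque([initial])
        while queue:
            state = queue.popleft()
            assert state[0] <= MAX_PRESSURE, f"unsafe state reachable: {state}"
            for nxt in successors(state):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return seen

    if __name__ == "__main__":
        print(f"{len(check((0, False)))} reachable states, none over-pressure")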


A blog post about the process would be fascinating, and probably something that many on HN would be interested in.


The problem is no one would volunteer to write crypto code under those constraints.


I know people who write crypto software that must undergo formal verification of the algorithm, requires detailed design documentation before a single line of code is written, etc.


Funny thing is that code is then compiled with a compiler that was not formally verified, so it's still a 'fingers crossed' situation.


The code generator we use (at my job; not crypto) was written in Coq and formally verified to generate code with certain properties, given properties of the input.

I assume that the crypto group I know, who developed most of the code generator I use, takes similar measures to make sure that their verified "theorems" translate correctly to code.

The unverified stage is actually the hardware, which is an open problem.


I think in many regulated industries Coq itself would also need to be certified.

What industry is this?


Pressure equipment, specifically sort of mid-scale items for laboratory and small-batch use.

Our low level code (ie, directly controlling machines) is written in the SPARK environment. This code tends not to get updated often, and has a high level of verification to it. It's what actually handles the pressure cut-offs (ie, hard limits on the machine), turning valves, etc.

Our middleware code is based on a specially developed VM that gets code generated for it in Coq, to ensure that it doesn't choke or become unresponsive. However, it's not directly responsible for safety control and has somewhat laxer restrictions.

Coq is useful for demonstrating that the middleware analytics will complete in a given time profile and not crash out the server on erroneous input.
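
A toy sketch of one way to get that sort of bound, a fuel-bounded interpreter in Python (the opcodes are invented for illustration; this is not our actual VM):

    class OutOfFuel(Exception):
        pass

    def run(program, fuel=10_000):
        # Every call is guaranteed to return (or raise OutOfFuel) within
        # `fuel` steps, whatever program or input it is handed.
        pc, acc = 0, 0
        while pc < len(program):
            if fuel == 0:
                raise OutOfFuel("time budget exhausted; input rejected safely")
            fuel -= 1
            op, arg = program[pc]
            if op == "add":
                acc, pc = acc + arg, pc + 1
            elif op == "jump_if_pos":
                pc = arg if acc > 0 else pc + 1
            else:
                raise ValueError(f"unknown opcode {op!r}")
        return acc

    if __name__ == "__main__":
        print(run([("add", 2), ("add", 3)]))        # terminates normally -> 5
        try:
            run([("add", 1), ("jump_if_pos", 0)])   # would loop forever
        except OutOfFuel as e:
            print(e)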


Right, but it's much less of a leap of faith, IMHO. Besides, if you turn off optimisations you can be reasonably sure the compiler didn't do something unintended.


I meant "volunteer" as in "not get paid".


> People with no programming experience should be literally banned by law from writing critical software.

Getting the government involved in who gets to write crypto software...

What could possibly go wrong?


I don't think he meant to "involve the government." I do agree with his sentiment, though: an inexperienced programmer should not be allowed to write critical software.

This is, of course, very difficult to implement in practice.


I don't know much about writing "critical software" - because it's not something I've ever done.

However, my guess would be that with proper processes, you should be able to let a junior programmer write the code, because bugs and errors and mistakes will come out in the wash. It's not like experienced programmers don't make mistakes! Perhaps it simply doesn't make sense to turn a junior guy loose, but my thinking is that where people's lives are at stake, depending on someone being a 'good coder' is a bad idea.


I agree. I think the problem isn't how the code is implemented; I think it's how it's designed and verified.

Most of the time, generalist developers can severely (sometimes even completely) mitigate the expense of competent design and verification by adopting trusted components and adapting the application requirements to those components (instead of the other way around, which is the usual way developers incorporate third party components).

If you don't do that, though, you're looking at the 10x-1x-10x problem: however long the implementation itself takes, you're looking at roughly twenty times that in non-implementation work (design before, verification after), and that work is serialized.

(Lest anyone think I'm talking my own book here: we do what I think is an atypically good job at handling the ancillary crypto stuff that comes up in normal applications, but I don't think we're well qualified to do formal reviews of cryptosystems.)
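
To make "adopt a trusted component" concrete, it often looks something like this sketch (Python with the cryptography package's Fernet recipe; the helper names are just for illustration):

    # Fernet has already made the cipher/mode/nonce/MAC decisions, so the
    # application's design-and-verification burden shrinks to key handling
    # and what it does with the plaintext. Requires `pip install cryptography`.
    from cryptography.fernet import Fernet, InvalidToken

    def new_key() -> bytes:
        # Keep this in a secrets manager, not in source control.
        return Fernet.generate_key()

    def encrypt(key: bytes, plaintext: bytes) -> bytes:
        return Fernet(key).encrypt(plaintext)

    def decrypt(key: bytes, token: bytes) -> bytes:
        # Authenticates before decrypting; a wrong key or a tampered
        # token raises InvalidToken instead of returning garbage.
        return Fernet(key).decrypt(token)

    if __name__ == "__main__":
        key = new_key()
        token = encrypt(key, b"attack at dawn")
        assert decrypt(key, token) == b"attack at dawn"
        try:
            decrypt(new_key(), token)  # wrong key
        except InvalidToken:
            print("rejected: bad key or tampered token")

The point is that the component has already made the hard decisions, so what's left to design and verify is key handling and what you do with the plaintext.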


Cryptography software is usually critical, but unlike the code for planes and nuclear power plants, I think it doesn't necessarily have to be. Typically cryptography software is advertised as secure against an adversary with unlimited resources. However, the designers of the software can choose whatever threat model they want, as long as they make that clear to their users.

I am thinking of a disclaimer like "We believe our product is secure enough that the minimum cost to an adversary to decrypt a message is at least $10^k" for some k, where the cost is an estimate of the total cost of factors like number of hours of cryptanalysis, number of cycles, etc.

This way, if the code is found to be completely broken at some point then the error is considered relative to the level of security the designers intended.
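
To make the arithmetic behind that disclaimer concrete, here is a toy Python sketch where the only attack considered is brute-forcing the key, and the price per guess is a made-up placeholder (a real estimate would have to cover cryptanalysis and implementation attacks as well):

    import math

    def brute_force_cost(key_bits: int, dollars_per_guess: float) -> float:
        # Expected cost of exhaustive key search (half the keyspace on average).
        return (2 ** key_bits / 2) * dollars_per_guess

    def k_for(cost: float) -> int:
        # The k in "breaking this costs at least $10^k".
        return math.floor(math.log10(cost))

    if __name__ == "__main__":
        for bits in (64, 80, 128):
            cost = brute_force_cost(bits, dollars_per_guess=1e-15)  # placeholder price
            print(f"{bits}-bit key: at least $10^{k_for(cost)}")

The honest k for a whole system is bounded by the cheapest attack, which may well be an implementation bug rather than the keyspace.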


I agree this is a much saner way to think about it.


I may have bombed an interview a few weeks ago when they asked me to design a security protocol on the fly. I really didn't want to answer that question (because anything I'd say would be vulnerable in some way) and ended up rambling on about certificates for a while.

I guess next time I'll be ready to say "that's not something you do during an interview."


I don't think the problem has much to do with how good a programmer you are.



