To this day, this piece from 1987 remains one of the most sought-after pieces of wisdom:
I still remember the jolt I felt in 1958 when I first heard a friend talk about building a program, as opposed to writing one. In a flash he broadened my whole view of the software process. The metaphor shift was powerful, and accurate. Today we understand how like other building processes the construction of software is, and we freely use other elements of the metaphor, such as specifications, assembly of components, and scaffolding.
The building metaphor has outlived its usefulness. It is time to change again. If, as I believe, the conceptual structures we construct today are too complicated to be specified accurately in advance, and too complex to be built faultlessly, then we must take a radically different approach.
Let us turn to nature and study complexity in living things, instead of just the dead works of man. Here we find constructs whose complexities thrill us with awe. The brain alone is intricate beyond mapping, powerful beyond imitation, rich in diversity, self-protecting, and self-renewing. The secret is that it is grown, not built.
Cool quote. Maybe now we "grow" software, in the sense that any application consists of probably thousands of interconnected systems/libraries/frameworks that work together in a way that is organic or emergent.
I wonder if people don't take the wrong lesson from this essay. There might not be any single 10x productivity-enhancing technology, but there have been many modest improvements. I think it's fair to say that a good programmer in 2015 is probably 10x more productive than a good programmer in 1995.
Someone reading this today might be tempted to dismiss tools they just don't understand as "not a silver bullet". However, even modest improvements are worth striving for, and if you only adopt what is "industry standard", you'll always be behind your early-adopter peers. (Or at least, those of your early-adopter peers who have good enough taste to distinguish a good, useful technology from the next shiny new thing that isn't ready for production and may never be.)
Things that have been a boon to me have been getting into Linux back in the early days when it was new-ish, the arrival of Google, learning Haskell, learning to use source control (first svn, now git), and the availability of things like Stack Overflow and Wikipedia. If I had to go back to using the tools of the mid nineties (Borland Turbo C++ on Windows 3.1, NCSA Mosaic and a 14.4 kbps modem, 33 MHz processors, 8 MB of RAM, etc...), I could still sort of get stuff done as a programmer, but really we've come a long, long way since then.
It sounds like you personally gained access to better tools, and that programming tools and productivity-enhancing technology in general have become far more accessible. But programmers had access to Unix workstations, DVCS, places to ask and answer questions online, etc., in 1995 as well. "It's fair to say that a good programmer in 2015 is probably 10x more productive than a good programmer in 1995" is an extraordinary claim without any of the required extraordinary evidence.
So, this paper is actually from 1986. More fuel for your argument.
Out of the Tar Pit (http://shaffner.us/cs/papers/tarpit.pdf) is an interesting response. I'm not convinced Functional Relational Programming is a silver bullet. But I think the arguments about state are certainly compelling.
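The paper's point about state can be seen in a toy example (my own sketch in Python, not code from the paper): a total kept as mutable state must be reasoned about via every call that ever touched it, while a total derived as a pure function of its inputs can be reasoned about locally.

```python
# Mutable-state version: correctness depends on every call site
# having invoked add() in the right order, exactly once each.
class StatefulTotal:
    def __init__(self):
        self.total = 0

    def add(self, amount):
        self.total += amount  # hidden state accumulates here

# Derived version: the total is a pure function of the inputs, so
# any state of the system can be reproduced from the data alone.
def total(amounts):
    return sum(amounts)

orders = [10, 25, 5]
s = StatefulTotal()
for a in orders:
    s.add(a)

assert s.total == total(orders) == 40
```

The difference looks trivial at this scale; the paper's argument is that it compounds as the number of interacting pieces of state grows.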
After reading Out of the Tar Pit, I got the impression that No Silver Bullet was perhaps accurate for individual programmers. But I think there are still lots of silver bullets for teams of programmers. I think a lot of your examples fall in that category. Source control, internet resources, etc. That all helps us collaborate.
I think maybe Out of the Tar Pit points at some things we can do in our code to improve our productivity as a group (by making code easier to understand and reason about.) I think some of Bret Victor's immediate feedback stuff (https://vimeo.com/36579366) and the places the Eve team are taking it (http://www.chris-granger.com/2015/08/17/version-0/) are also pointing in that direction.
Ultimately, I think a lot of the crappiness in software these days exists because the things that are very easy for a single programmer make things hard for a group. It's funny because the author of "No Silver Bullet" is also the author of "The Mythical Man Month". Maybe he doesn't focus on collaboration because the conclusion of the Mythical Man Month was "only let one person code and have everyone else support him."
I think it depends on what you're doing, and what counts as productivity. I think lines of code per developer per day haven't gone up much, but it's a lot easier to deliver much more complex products now than it was then, in part due to the large volume of readily accessible library code. There's less need to write special-purpose code for everything, because someone probably wrote the generic parts already.
There are some software domains that haven't changed much at all since the 90s or earlier. Systems programming is still done largely in C and C++, and modern kernels work basically the same way they did then with a few minor differences. Scripting has changed dramatically, web frameworks have changed dramatically, business applications have changed quite a bit. C# is noticeably different from Java, which is different from C++ or Cobol or whatever it was people used to write business software in back then.
The biggest problems in tech are not technological but social: the interaction of the primary creative agents, those of us responsible for making great strides in magnifying human intellect. Those of us making the software.
The biggest problems that we could be better at, at least.
One other hard problem is to figure out how to construct systems that are not so damn fragile. I am not talking about raid-5 here, but how to find ways to tackle the intrinsic complexity of our problems so that minor faults in our definitions and solutions will not cause catastrophically nonlinear crashes.
Maybe that requires hard AI, but it would be nice if when developing the equivalent of a house, the boiler would not explode and destroy the house if one forgot to connect the doorbell. If you see what I mean.
Back to the original topic: no, I don't think an average developer is 10x more productive now. Maybe 2x, thanks to it being so much easier to find information about everything, but the intrinsic complexity is still there, and we still have the same brains.
Although I do think that Google and Wikipedia are enablers: for my hobby projects I can google and implement a really quick/smart algorithm that I probably would not have found otherwise, much less invented myself. However, it does not help the typical "enterprise" coder who is churning out boring code while trying to figure out ill-defined and complex rules.
I believe, though, that developer variance is 10x, at least regarding long-term maintainability (less time wasted fixing old bugs, easier to implement new features, etc.). Unfortunately, it only takes a few people to complicate things beyond repair, so the larger the team, the less it matters.
I just got reminded of my favourite observation:
Complicated problems require complicated solutions.
It follows that if someone is a bit dense, all their problems are complicated.
Therefore, their solutions will be complicated too.
Or just Conway's Law. One of the things that Cloud Computing is imposing (with its inherently stochastic performance variation) is the notion that developers develop systems that are inherently fault tolerant. It's been a long time since I wrote programs that trusted other components to always do their job.
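To make the "don't trust other components" point concrete, here is a minimal sketch in Python (the function names and failure model are invented for illustration): bound the retries and degrade gracefully instead of assuming the other side always does its job.

```python
import random

def flaky_service():
    # Stand-in for a component with stochastic behavior (hypothetical).
    if random.random() < 0.5:
        raise ConnectionError("transient failure")
    return "ok"

def call_with_retries(fn, attempts=3, fallback=None):
    # Don't trust the component: retry a bounded number of times,
    # then degrade gracefully instead of crashing the caller.
    for _ in range(attempts):
        try:
            return fn()
        except ConnectionError:
            continue
    return fallback

result = call_with_retries(flaky_service, fallback="degraded")
assert result in ("ok", "degraded")
```

Real systems layer on timeouts, backoff, and circuit breakers, but the shape is the same: every call site assumes failure is normal.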
I remember, as a young programmer, how hopeful I was that this technique (OO, RAD, "components") or that language (Scheme, ML -- those two were the "ancient secret knowledge" that we "rediscovered" and our bosses had "overlooked" -- C++, Ada) would change programming forever and make software development completely different, and how angry I was at the experienced developers who told me that most approaches would fail, and the best few would yield significant but evolutionary, rather than revolutionary, advances. Now I'm the one saying this to others...
I think that two specific advances did end up yielding significant (though evolutionary) progress since that text was written: automatic garbage collection, and automated testing practices. Both were viewed with skepticism, the latter with some derision, but they have both helped us out of a real crisis of the software industry in the nineties, when too many projects just couldn't get off the ground (or, rather, continuously crashed).
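For the testing point, a minimal sketch of the practice (word_count is a made-up example, and plain asserts stand in for a real test framework): the value is that behavior gets pinned down mechanically, so a regression fails loudly instead of shipping silently.

```python
def word_count(text):
    """Unit under test: count whitespace-separated words."""
    return len(text.split())

# A tiny automated test suite: each assertion pins down one
# piece of behavior, including the edge cases.
def test_word_count():
    assert word_count("no silver bullet") == 3
    assert word_count("") == 0
    assert word_count("  spaced   out  ") == 2

test_word_count()
```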
Yes, the world would be a better place if young minds all stopped dreaming and accepted the wisdom of the elders. There's nothing left to explore. </sarcasm>
I'm getting older now (40), and let me tell you one thing:
It's really saddening to look at all the wasted effort made to solve the same problems, over and over again.
Mainframes were shit (in most contexts), but they have been reinvented again and again. The browser is the new 3270 terminal, but it is gaining more and more capabilities, becoming the new PC. Most IT departments are enforcing the rules and practices of the mainframe era, the same rules that caused the PC revolution, i.e. you are not allowed to manage your own computer and run your own programs.
RPC and IPC have been reinvented again and again, but with no real progress. The few things learned along the way have been lost (abstract syntax notation, and separating message definition from encoding, for one).
So we get these new protocols that do not (or did not) use a message definition language, because parsing text is "easy."
We run all protocols on top of HTTP because it was impossible to get the firewall guys to open up the firewall.
Now the firewall guys have caught up and can close down specific http applications, so we are back at square one but with shitty protocols instead.
There is STILL no (real) language with intrinsic relational support, i.e. two-way relations as a first-class language feature; instead we are mucking around with (one-way) arrays and maps (using monads, though!), but in another, cooler language.
(I happen to believe that an entity-relation model typically is a reasonable representation of the real world and the abstract concepts in it.)
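As a sketch of what first-class two-way relations might feel like, here is a hypothetical Python approximation (the Relation class and its method names are invented): both directions stay consistent because a single operation updates them together, which is exactly what current languages force you to hand-roll with two maps.

```python
class Relation:
    """Hypothetical sketch of a many-to-many relation that is
    navigable from both sides, kept consistent by construction."""
    def __init__(self):
        self._fwd = {}   # left  -> set of rights
        self._rev = {}   # right -> set of lefts

    def relate(self, left, right):
        # One operation updates both directions together.
        self._fwd.setdefault(left, set()).add(right)
        self._rev.setdefault(right, set()).add(left)

    def rights_of(self, left):
        return self._fwd.get(left, set())

    def lefts_of(self, right):
        return self._rev.get(right, set())

enrolled = Relation()
enrolled.relate("alice", "databases")
enrolled.relate("bob", "databases")
enrolled.relate("alice", "compilers")

assert enrolled.rights_of("alice") == {"databases", "compilers"}
assert enrolled.lefts_of("databases") == {"alice", "bob"}
```

A language-level feature would presumably also give you querying, constraints, and updates over such relations; this only shows the bidirectional navigation part.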
Javascript and PHP. Nothing learned at all. 50 years of computer science down the drain.
Functional programming: well, Lisp and Scheme are not exactly new, and to be honest, Clojure is not that much of an improvement over Scheme. Everyone should know a bit of Scheme or Clojure though.
The only thing that's getting better and better at quite some velocity is the hardware.
Possibly also the "agile" movement, since it's effectively about claiming back control for the ones who know how to build stuff, from the checkbox guys and process followers.
Boy, are you a pessimist! That's not at all the message. The message is about working towards progress rather than jumping from one fad to the next with blind faith that something will "save" us. It's a call for technological progress rather than an alchemist's semi-religious search for a way to turn lead into gold.
My first reaction after reading the paper was similar to yours. That I had fallen for the OOP hype train. Later when my boss cited it as an excuse to iterate our aging platform instead of attempting a redesign, my view of it began to change.
It's true many technologies have been overhyped, but it's far too early to throw in the towel. A little over-optimistic, youth-fueled zeal has led to some of our greatest advances. I think Bret Victor sums my views up best here:
Great talk. Alan Kay and others have said similar things.
We've squandered all the gains the hardware people have made on slower, more bloated software, resulting in computers whose response times are no faster - and sometimes actually slower - than their equivalents were 30 years ago.
And I don't see much youth-fuelled zeal going into solving the problem, or even coming out with much in the way of innovation which might help. Instead, people are excited about "new" languages like Golang and Rust (which doesn't even have a garbage collector!), and rebranded copies of BSD and Linux.
People are incredulous when I tell them that while the trailing edge - people entering COBOL into IBM mainframes on punched cards - has advanced considerably, the leading edge has regressed. As Philip Greenspun put it: " These days, most former Lisp programmers are stuck using Unix and Microsoft programming environments and, not only do they have to put up with these inferior environments, but they're saddled with the mournful knowledge that these environments are inferior."
> Rust (which doesn't even have a garbage collector!)
I think you need to take an actual look at Rust if you think that the lack of garbage collection is somehow a design flaw. :P You're free to deride the language as "just" an improvement on the state of the art of systems programming instead of attacking the fundamental underlying problem (which is that the underlying systems themselves are tremendous clusterfucks), but not only is that outside of the domain of a programming language, it is also the case that merely ignoring the status quo out of personal dissatisfaction does not succeed in moving the world forward.
"The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man." -- George Bernard Shaw
AIUI Rust grew out of a dissatisfaction with C++, and it does have a few good features, such as type inference. But since Java was released, it's been unthinkable for a high-level language to require programmers to do their own memory management. If Rust is aimed solely at systems programming, to be used the way Algol 60 was used to implement MCP, PL/I for Multics, C for Unix, and Oberon for Oberon, then the lack of garbage collection is understandable. But you can never guarantee your product will be used as intended: I've encountered people who think C is an acceptable choice for everything.
I'm not merely dissatisfied with the current status quo: I'm actively doing something about it. IMO we're doing practically everything wrong, from hardware upwards. Fixing it requires systems very different from those in widespread use today.
As you say, Rust is aimed at being a C++ replacement, mainly for systems programming, and having a garbage collector is a hindrance there. Sure, people will use it for not-strictly-systems-programming things, but that's on their head: they evaluate the trade-offs (e.g. whether they can do without a GC) and choose Rust. Compromising the core goal (memory safety without garbage collection) by including a pervasive GC, just because some people might make a slightly silly choice and use Rust where some other tool is better, would be just wrong. It would make Rust inappropriate for the places where there is essentially no other choice like it, putting it into a class with many other alternatives (e.g. Haskell, OCaml, D, etc.).
In any case, on one hand you complain about slow, bloated software, and on the other about Rust not having a GC, when the unpredictability of a GC is a major component of bloat in a lot of software. Don't get me wrong: it is possible to write sleek software in a managed language with a GC, but for many tasks this generally requires fighting against the GC, with things like object pools and buffers (basically reimplementing the standard techniques from non-GC'd languages), and, of course, it is definitely possible to write bloated software without one.
Rust is designed to make it easier to write code without a GC, by adopting many of the advantages that garbage collectors/managed languages bring to the table (and more, e.g. static protection against iterator invalidation). There have been a lot of people from Ruby, Python, JavaScript (etc.) backgrounds adding lower-level/more-control programming to their toolbox via Rust, something that was too daunting previously. This means that they can write software (or at least, the sensitive parts of their code) that doesn't suffer from the overhead/bloat of the managed languages.
Lastly, Rust's memory management is nothing like "manual memory management" in C (or historical C++). I mean, it compiles down to essentially the same thing at runtime, but what the programmer writes is very different. The combination of lifetimes, destructors, and generics means that it is difficult to get wrong: memory leaks are rare, and the compiler will tell you about any dangling pointers, etc.
Given that you list "type inference" as the headline feature of Rust, I continue to think that you haven't actually taken a look at the language in the slightest. :P Guaranteed memory safety with neither the bloat of a garbage collector nor managed runtime is nothing to sneeze at, and is something that no other language (no, not Ada, or Oberon, or...) has ever done on an industrial scale (and only Cyclone has attempted it on a non-industrial scale).
I think your boss' reading of No Silver Bullet was mistaken. He probably misread the paper as something that supported his own conservative preconceptions, and probably disregarded everything else the paper says, like, you know, "grow your designers, they are as important as managers", which includes accepting when one of them says "you should consider using these new tools; they will help us because of X, Y and Z".
One of the problems is that many people actually prefer complexity over simplicity for a variety of reasons: they wrote it, and it will be simplified over their dead bodies; or it needs all these features to satisfy all their lusers; or they want to show off how smart they are; or it's harder to reverse-engineer, ...
Everyone agrees with simplicity when stated in such general terms, but in practice, simplicity where?
Note that simplicity in one piece of software often leads to complexity in another. For example, a language with a minimal set of instructions may be considered "simple", but building software with it can be complex and unwieldy. A minimalist API may be elegant and concise, but lead to headaches when attempting to use it. Etc, etc.
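A small illustration of that trade-off (MinimalStore is a made-up example, not any real library): an API pared down to get and set looks simple, but it pushes a read-modify-write dance onto every caller that wants anything richer, such as a counter.

```python
class MinimalStore:
    """A deliberately minimal API: just get and set."""
    def __init__(self):
        self._data = {}

    def get(self, key, default=None):
        return self._data.get(key, default)

    def set(self, key, value):
        self._data[key] = value

# The simplicity moves complexity to every caller: even a counter
# becomes a read-modify-write dance the client must repeat (and,
# in a concurrent setting, must also somehow make atomic).
store = MinimalStore()
for _ in range(3):
    store.set("hits", store.get("hits", 0) + 1)

assert store.get("hits") == 3
```

Adding an increment operation would simplify every caller, at the cost of a slightly less minimal API; neither choice is "simple" in the absolute.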
I don't think there is a good objective definition for simplicity, since a large component of what I'm getting at is subjective and context-dependent, and therefore involves "appropriateness". API design is different than app building is different than container building, etc. This is why good developers are so important.
Agreed. That was my point, that simplicity is subjective, so while people seem to agree that simplicity is desirable, they often disagree about where to simplify (which often comes with trade-offs).
It's less of a bullet and more of a main battle tank - without proper armament it won't defeat the target, but it does protect you and lets you drive through many obstacles like they weren't even there.
I believe Brooks will eventually be proven wrong about AI and Automatic Programming. Granted, he has not been proven wrong yet.
It is interesting, perhaps even a little surprising, that reasoning about programs has proven to be one of the most intractable challenges for AI. Why, to take a simple-sounding example, when my program does something wrong, can I not simply ask the machine why it did it? Why do I have to go in with a debugger and find the problem myself? This wouldn't require solutions to any of the familiar bugaboos of AI: it doesn't need a massive database of facts about the world, it doesn't require visual image recognition, it doesn't require natural language understanding, it doesn't require robotics or simulated emotions or any of those things. It just requires logical reasoning. Can't computers do that?
Well, no, it turns out, they can't do it, not in its full generality. Despite all the things it doesn't require, general logical reasoning is still almost AI-complete, which is a cute way of saying we still don't know how to make machines do it.
And fully general reasoning is required to solve the problem. We have lots of static analysis algorithms that can answer specific kinds of questions about programs, but we're still not very close to being able to answer an arbitrary question, even fairly simple ones. The spaces we would have to search are just too big; the branching factors are too high.
I think there is hope on the horizon, though. The so-called automated reasoning systems that have been built over the last few decades -- also called theorem provers -- have mostly not made use of any machine learning techniques. These two technologies are starting to be combined, and machine learning is of course a very hot area right now. I believe that eventually we will have Automatic Programming worthy of the name as a result of this combination.
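The gap is easy to see in miniature. Today, a program can only "explain why" if the programmer hand-instruments it with the reasons; a toy Python sketch (the rules and names are invented):

```python
def classify(order, log):
    # Toy rule-based program that records the reason for each
    # decision, so we can "ask it why" after the fact.
    if order["total"] > 1000:
        log.append("flagged: total > 1000")
        return "review"
    if order["country"] not in ("US", "CA"):
        log.append("flagged: unsupported country")
        return "review"
    log.append("accepted: passed all rules")
    return "accept"

trace = []
verdict = classify({"total": 1500, "country": "US"}, trace)
assert verdict == "review"
assert trace == ["flagged: total > 1000"]
```

General automated reasoning would mean answering such "why" questions about arbitrary, uninstrumented programs, which is the part we still don't know how to do.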
I couldn't disagree with you more. This paper offers no insight, only excuses to accept that software will always be buggy. If anything, I'd encourage young engineers to explore technologies that have not been explored to their fullest potential. Computer science is a very young field, and many unexplored paths, such as flow-based programming or some automated programming method that hasn't been dreamed up yet, may lead to far more reliable software.
I dread to think what today's CPUs would be like if an electrical engineering version of this had been required reading for every EE in the 70s. We should just accept buggy CPUs because of the complexity of designing CPUs with 7 billion transistors.
> This paper offers no insight, only excuses to accept that software will always be buggy.
The paper does not address the problem of buggy software, though it is related. It addresses the problem that software is complex.
And the insight it offers is that not all complexity is equal: there is accidental complexity (programming is hard because the way we have done things historically is not yet optimized) and inherent complexity (programming is hard because we are modeling complex stuff and it is inherently hard for human minds to juggle all the relevant details).
The excuses you complain about are really examples where people confuse one with the other. I.e. you cannot solve the problem of requirements gathering with better compilers, because the problem is not technical; it is that stakeholders are playing political games in the background, so left to their own devices they'll give you vague, confusing requirements now, with the tacit intention of leveraging them when the inevitable struggles come later.
> I dread to think what today's CPUs would be like if an electrical engineering version of this was required reading for every EE in the 70s.
I am sorry to inform you that an EE version of this will be direly needed in the near future. As Moore's law keeps hitting more and more fundamental limits of physics, we need engineers who are able to work on the problems that matter and make hardware that gives us the best possible performance given the available constraints. It would not do to have each generation of young EEs start yet another cargo cult every 5 years or so, wasting valuable resources in the attempt to pack twice as many transistors per wafer.
> This paper offers no insight, only excuses to accept that software will always be buggy.
Is that how you read it? I read it as a call to accept an open eyed view of the industry and its advances, and tirelessly work to make progress rather than believe in messianic fads and jump from one to the next. It is optimistic yet realistic, and warns against unconstructive zeal.