> I once reviewed code for a company that got purchased. It took me less than a few hours to reach a conclusion. There were 7 developers, and over 2 months only one was writing code. The entire team except that one developer was fired. Twitter has grown a reputation for being slow and a rest-and-vest haven. I won't be surprised if they are looking for people not writing code.
I ran across some developers (in multiple orgs over the years) who would produce large amounts of almost purposefully unmaintainable code. Yes, they were “productive”, and no, the stuff they produced made no difference and was a waste (both feature-wise and code-wise). 100% of these codebases turned out to be unsalvageable and were rewritten. It would usually take orgs many months, often after such developers left, to realize the complete and utter waste they left behind.
Not saying there are no slackers; it's just that productive devs are not necessarily those who produce the most code.
“Almost every software development organization has at least one developer who takes tactical programming to the extreme: a tactical tornado. The tactical tornado is a prolific programmer who pumps out code far faster than others but works in a totally tactical fashion. When it comes to implementing a quick feature, nobody gets it done faster than the tactical tornado. In some organizations, management treats tactical tornadoes as heroes. However, tactical tornadoes leave behind a wake of destruction. They are rarely considered heroes by the engineers who must work with their code in the future. Typically, other engineers must clean up the messes left behind by the tactical tornado, which makes it appear that those engineers (who are the real heroes) are making slower progress than the tactical tornado.”
― John Ousterhout, A Philosophy of Software Design
I remember reading somewhere people can be broken down into 3 types. It applies in programming as well as in a restaurant kitchen or your family's garage cleanup.
I'm probably misremembering the details, but it was something like:
Cowboy: move fast and break things style. Sorta what you mentioned above.
Duct tape: most people are a form of this. Work on something just long enough to get it working but it's not beautiful and not prepared for the future or edge cases.
Professor: these get very little done because of the amount of planning put in, and they tend to overthink most things. But they almost never have to come back to a job, since it's done properly and to completion.
Any team without a blend or with too many cowboys or professors is going to have a tough time.
That's why I refuse to touch code made by these tactical tornadoes. It's simple: you'll get all the blame and none of the recognition. Anyone who is in that situation: don't do it, even if it means you'll get fired. You'll get fired anyway.
The only exception is if management comes to understand what went wrong there and how it should be done. The chances of that are very slim, however.
Peer reviews catch tactical changes that are strategically poor. There are times when such a change won't get sign-off from reviewers. Other times, when it gets merged, at least it is documented that the solution is tactical, people know it, and it was rationalized by the team. A plan may even be put in place to revisit the subject and do a proper job; there could be a concrete ticket for that.
Reviews also slow things down. You can't move quite as fast and break as many things if everything goes through a review pipeline.
Test-oriented development helps. Sometimes people find it easier to develop a tactical solution that somehow gets certain tests working and then refactor for strategy. They don't have to feel they have wasted time on the tactical solution because they get to reuse parts of it, and use it as a jig to guide the improved solution.
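The tactical-then-refactor loop described above can be sketched in a few lines. This is a minimal illustration in Python; the `slugify` function and its behavior are hypothetical examples, not anything from the thread:

```python
import re

# Step 1 (red): write the tests first, against behavior we want.
# The same test function later acts as a "jig" for the rewrite.
def run_tests(slugify):
    assert slugify("Hello World") == "hello-world"
    assert slugify("  spaces__and_underscores ") == "spaces-and-underscores"

# Step 2 (green, tactical): the quick hack that makes the tests pass.
def slugify_tactical(title):
    s = title.lower().replace(" ", "-").replace("_", "-")
    while "--" in s:          # crude cleanup of doubled separators
        s = s.replace("--", "-")
    return s.strip("-")

run_tests(slugify_tactical)

# Step 3 (refactor, strategic): rewrite under the same tests,
# reusing what was learned from the tactical version.
def slugify_strategic(title):
    # Collapse any run of whitespace, underscores, or hyphens into one hyphen.
    return re.sub(r"[\s_-]+", "-", title.lower()).strip("-")

run_tests(slugify_strategic)
```

The point is that the tactical version isn't wasted work: it pins down the expected behavior, and the tests it leaves behind make the strategic rewrite cheap to verify.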
Ron Garret's space-debugging story from NASA has an element of this. They had a custom language in Lisp which made certain guarantees, like deadlocks being impossible. But some coder went around it, possibly for a tactical reason, using lower-level code outside of that paradigm.
Sometimes people aren't being tactical; they just don't understand the system well enough to know that some obvious solution will hit a snag.
This lines up with my experience. The person making constant "tactical" changes looks super productive. But they just deferred the productivity cost. You end up continuing to deal with their fragile mess for the rest of the lifetime of the product.
I fully understand that there are circumstances where true tactical changes do make sense, but the tradeoffs should be considered up front.
The way to handle this, I believe, is to make sure that the bug reports resulting from the "tornado"'s work end up back in their own lap. Don't let someone else fix them, especially not someone on another team. It has to be done non-antagonistically, of course. But it's the only way to make it clear to both management and (more importantly) the dev themself that there is a tradeoff for speed.
LOC produced is a good example of what I call a "negative" metric: a high value does not mean much, but a low value is a good indication that something is up.
As a manager, I won't care that dev A produced 2k LOC vs dev B's 4k. But if I see dev C produced only 30 LOC over, say, a quarter, something is most likely off.
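A "negative" metric like this is essentially a one-sided filter. A minimal sketch in Python; the names, numbers, and the 100-LOC floor are all made up for illustration:

```python
# Sketch of the "negative metric" idea: a low LOC count triggers a look,
# while high producers are deliberately NOT ranked against each other.

def devs_worth_a_look(loc_by_dev, floor=100):
    """Return devs whose quarterly LOC is below a sanity floor.

    This is a prompt for a conversation, not a firing list: there may be
    a thousand good reasons (reviews, mentoring, design work, deletions).
    """
    return sorted(dev for dev, loc in loc_by_dev.items() if loc < floor)

quarter = {"dev_a": 2000, "dev_b": 4000, "dev_c": 30}
print(devs_worth_a_look(quarter))  # -> ['dev_c']
```

Note the asymmetry: the 2k-vs-4k difference between dev A and dev B never enters the computation at all, which is exactly what makes this a negative metric rather than a productivity ranking.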
At some point, the curve definitely bends negative. I used to work on a team adjacent to one of the most "productive" programmers at G by that metric.
My team, and at least 4 other surrounding teams, each had one full-time SWE cleaning up after him. He would go around making changes based on assumptions he thought were safe, breaking tests, and then his manager would argue that, because you had never promised that what he did was unsafe, you had to fix the test breakage.
Yes, there are large negative externalities to a certain type of engineer who writes a lot of code of dubious quality, even ignoring all the trivial cases of "artificial code stuffing", etc.
I used to work in a very dysfunctional org where the main "architect" was writing lots of broken code that kinda worked, and the 20+ people in the team around it would basically be in full-time firefighting mode. The architect was a very smart guy but, ironically enough, had no sense of architecture: his level of abstraction for networking was pushing bytes through a pipe, and for calculations, for loops.
That was similar to this guy - he was a good IC who ended up over-promoted into an "architecture" role (L7). He didn't really know how to architect things, so he went around creating externalities while resurrecting old, dead projects that people before him had designed.
In your career, how often have you seen good software engineers whose main contribution was deleting code?
Please take my argument in good faith: I am not looking to evaluate people based on their added LOC. The context is orgs where I have reason to believe some people are slacking off, and I'm looking for people who do next to nothing.
That's much more common than people who magically make everyone more effective by only deleting code.
I've seen a few projects across different organizations where an old dev was bad about copying and pasting code and ignored DRY principles. The projects had seen almost no refactoring, and the primary goal of a new dev was cleaning up the redundancy to better organize the codebase.
I don't think anyone (sensible) thinks 'if someone only writes 30 lines of code per quarter, they should be immediately fired'. I think the point is more that if you have someone who's only written 30 lines of code, it's worth taking a look to see what they've been doing instead.
For sure there may be a thousand good reasons for it, but as a quick heuristic for 'who is worth having a quick check to see if we're getting the value out of them that we're paying them for?', I don't think it's irrational.
No, this ratio is unrealistic. Either you're making numbers up rather than describing a real situation, or yes you're far too slow for me to want to hire you.
There was a funny story I don't remember where... A manager was doing LOC as a metric, and they were required to count it. But an engineer refactored and put -1000. That was the last time they asked for it.
I'm finally working directly with one of these developers. Thankfully he is leaving next week of his own volition. My workload has doubled over the past several weeks due to rewriting most of his code, and needing to extensively vet code that he's still submitting. The worst part is that he's reasonably productive at producing lines and lines of code that barely function.
I am curious: does your organization not use TDD? How would the code be allowed to merge without running your test suite?
In my experience, I have found it more productive to coach and mentor developers to adopt TDD and DevOps practices than to vet their code. Instead of vetting their code, I would incorporate static code analysis and vulnerability analysis into the build system.
Mutation testing can help evaluate the quality of a test suite. The application code is automatically tweaked and the tests are run. If a mutant is not "killed", then additional tests may be needed.
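To show what "killing a mutant" means, here is a hand-rolled illustration in Python (real tools such as mutmut or PIT generate the mutants automatically; the functions and suites below are hypothetical):

```python
# The code under test: a simple inclusive range check.
def in_range(x, low, high):
    return low <= x <= high

# A mutant: the kind of tweak a mutation tool would generate
# (boundary operator <= changed to <).
def in_range_mutant(x, low, high):
    return low < x <= high

def weak_suite(fn):
    # Only checks an interior point, so it cannot tell the mutant apart.
    return fn(5, 0, 10) is True

def strong_suite(fn):
    # Adds a boundary case, which distinguishes (i.e. "kills") the mutant.
    return fn(5, 0, 10) is True and fn(0, 0, 10) is True

# weak_suite passes for both versions  -> mutant survives -> suite too weak.
# strong_suite fails for the mutant    -> mutant killed.
```

A surviving mutant doesn't prove the code is wrong; it proves the test suite never exercises the behavior the mutation changed, which is exactly the gap to fill.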
> coach and mentor developers [rather than] vet their code.
How do you teach these things concretely without discussing specific code? How do you tell if the lessons are sticking without checking their future work?
Aside from that, neither TDD nor DevOps practices will get you idiomatic code (relative to both internal and external conventions), documentation, performance beyond explicit requirements worth a damn, test suites that are any good to begin with, etc. If you're running through a backlog of CRUD-ish features or whatever, maybe those don't matter, though then I also wonder why the need for TDD instead of just a good CI pipeline.
Not enough. I'm trying to do this with our new code base, but it's difficult when I need to get everyone else on board. I'm trying to lead by example, but I feel like I'm also racing against other devs to get good feature work in before bad code gets in. The rest of the devs will simply copy what the existing code does instead of figuring out how to do something properly. Doubly frustrating that I don't have a senior title or pay, but am hiring and cleaning up after senior devs.
This is also a sign that code reviews are not working properly (either missing because the team is too small; or not enough time is invested to do them properly; or people aren't "free" enough to tell their coworker that their code is bad; or the review was done too late, once there wasn't enough time to fix things; etc.).
Or code reviews are working fine, but there are no long-term consequences for people whose code consistently takes 10x more revision after review. (This is also a kind of organizational failure, but one where reprimanding the IC in question can still be the right response. But I also doubt a drive-by ad-hoc external review of every person in the company is going to be the best approach to find this!)