
Even if you can’t automate away 100% of software developers, it doesn’t mean you can’t automate away the lower 80% of the skill set. From the perspective of the median of the labor market, that has much the same effect as automating away software development entirely.


We’ve been automating programmers’ work for as long as there’s been programming. Nobody implements their own loops anymore.

As the cost of programming comes down, the set of problems it can be profitably applied to grows. So far, that’s been increasing the demand for programmers whenever the cost comes down.


>Nobody implements their own loops anymore.

It’s funny how this is true in several ways:

* Nobody needs to hand-write compare-and-jump instructions anymore to implement a loop at the lowest level

* Iverson ghosts [0] let us operate on arrays without explicit loop constructs, e.g. array.sum() in NumPy

* Modern IDEs autocomplete loop syntax

I’m sure there are many other examples I’m missing. All of these are examples of automation of programmers’ work, even if we often don’t think of them that way (especially #1 and #2: higher-level languages/syntax)

[0] https://dev.to/bakerjd99/numpy-another-iverson-ghost-9mc
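To make the second point concrete, here's a toy sketch of the same reduction written two ways, once with an explicit loop and once with a loop-free built-in (NumPy's array.sum() is the same idea applied to whole arrays):

```python
values = [3, 1, 4, 1, 5, 9]

# Explicit loop: the programmer spells out the iteration by hand
total = 0
for v in values:
    total += v

# "Iverson ghost" style: the loop is implicit in the reduction
assert total == sum(values) == 23
```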


You could extend that with examples where specific types of loop actions are abstracted even further, such as filtering, which is aided by (standard) libraries, but also by introducing dedicated syntax into the language (think lambdas / closures)
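For instance, a minimal sketch in Python of the same filtering done three ways, at increasing levels of abstraction:

```python
numbers = [1, 2, 3, 4, 5, 6]

# 1. Explicit loop: the filtering logic is written by hand
evens_loop = []
for n in numbers:
    if n % 2 == 0:
        evens_loop.append(n)

# 2. Standard-library function plus a lambda
evens_filter = list(filter(lambda n: n % 2 == 0, numbers))

# 3. Language-level syntax: a list comprehension
evens_comp = [n for n in numbers if n % 2 == 0]

assert evens_loop == evens_filter == evens_comp == [2, 4, 6]
```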


Same with open source. The more open source exists, the easier it is to make more useful open source, which increases the surface area of profitable development, which fuels more open source.

1000 programmers is more than 1000x as valuable to the world as 1 programmer. A million may be more than 1000x as valuable as 1000.


The last statement is utterly wrong, and is wrong for most fields of endeavor.

One programmer on their own has the maximum possible productivity per person because there is no communication needed. The whole project sits inside one person's head.

Of course, real world programs usually need more than one.

Each new programmer that is added also adds a little overhead to the existing programmers.

Moreover, once the project gets really large, new forms of time wasting appear.

In a project with 1000 programmers, there is bound to be a huge amount of duplication, quite a lot of programmers working at cross-purposes to other programmers that they don't even know exist.

And there will also be programmers who do nothing and get away with it in the crowd.

And this is particularly true in open source, where there isn't any strong management at the top trying to prevent duplicate work.


I took the parent poster to mean that 1000 more programmers working on different open source projects has a force multiplication effect.


Yes, that’s exactly what I meant. Thanks!


As if implementing loops were really time-consuming. Time goes into solving problems, thinking, and trying. Writing code is only 10% of the total time spent. The cost of programming is not going down. In fact, it's going up.


> Writing code is only 10% of the total time spent

Sure, now that we have more powerful, pipelined, branch-predicted computers and you don't have to optimize every last bit of your program, that's true.


Well, I came to programming somewhat late, in the 1970s, but even back then, the actual typing part was a small fraction of the total time spent.

Perhaps you mean the 1950s?


I don't recall stating a year.

"Actual typing part" != "writing code"


what's fascinating is that developers have been automating away their jobs for decades with tools and libraries... so will this just add to the automation pressure, or will it be some sea change?

hard to know, economically-speaking...


Difficult to say. My bold prediction is that within the next two decades, web-dev bootcamps will decline as the field matures and its barrier to entry rises, despite pay being excellent.

How much less menial labor does one do with modern web frameworks relative to the state of the art in 2002?


I find the menial work is about the same.

Less was possible in 2002, so the menial tasks were different. But I think the split between interesting/novel code and boilerplate/menial work was about the same.


That is a bold prediction since people are constantly reinventing the web. For a few years it really looked like Macromedia Flash was going to be a thing to learn. These days it’s reactive UI components. Tomorrow who knows? I can’t see it ever really maturing as it’s not terribly hard to apply new paradigms to the web every few years that gain traction and move the state-of-the-art.


"field matures" is the operative phrase. Flash jumped in a void that was left by a pretty nonfunctional html experience. But it was proprietary, presumably was an energy hog, insecure, and was essentially a stand-alone experience that didn't play nicely with the browser. Reactive UI components are based on web standards, and it is VERY hard to imagine web standards (HTML, CSS, and JavaScript in particular) being replaced by anything in the near future. They are the "C" of the web for better or worse.

And yes I know WebAssembly is supposed to loosen JavaScript's hold on the web, but as far as I can see that has not happened yet even though every browser now supports it. I am not exactly sure why that is.


For one thing, the js VM, V8, is extremely good these days. I’ve been very curious about these questions, and in single thread contexts, V8 seems to beat JVM, with compute performance roughly equivalent to Rust in debug mode.

Since the bindings to the rest of the “essential” browser model are all geared towards js, the appropriate mental model might be that js is the “assembly” of the browser, even though it is not assembly in any meaningful compute/memory model sense.


tbh, and while knowing this argument is hard to push on most companies, "old web technologies" are just as relevant as they were a decade ago.

If I look at any project I've made over the last ten years with "hype technologies", REST APIs everywhere, complex front-end frameworks (so complex that they're literally a second codebase), they could all have been developed with an old and mature framework like Django, RoR or ASP.NET.

Once those projects are done, you can clearly see that things like sending a form are incredibly complex and non-standard, resources are wasted everywhere, and only 1 to 3 APIs are used externally, while you maintain the dozen others for your own usage.

The sole reason we are on more complex stacks is that the industry magically acknowledged that "web development" was in fact two different jobs.


The one thing I like about “modern” JS-driven websites now versus then is, in general, interactive web software (e.g. with JS-driven interaction) is much more maintainable now.


> hard to know, economically-speaking...

It’s not hard to know at all, it’s a well studied area of economics.

https://en.wikipedia.org/wiki/Jevons_paradox
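A toy illustration of the paradox as it applies here (all numbers invented for the sketch): if the cost per project falls but the number of newly profitable projects grows faster, total spending on programming rises rather than falls.

```python
# Hypothetical numbers, purely illustrative of the Jevons paradox argument.
cost_per_project_before = 100_000
cost_per_project_after = 50_000   # programming gets 2x cheaper

projects_before = 1_000
projects_after = 3_000            # 3x as many projects become profitable

spend_before = cost_per_project_before * projects_before
spend_after = cost_per_project_after * projects_after

# Cheaper programming, yet total demand for it went up, not down.
assert spend_after > spend_before
```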


Ah, I remember the first time I discussed this with a friend. It was right before graduation, and we were wondering if we'd still have jobs in a few years.

I think that was 1982.

For me, a modern language like Python already removes 90% of the work I was doing in 1982, and templates and my editor do a lot.

It's really not clear there are huge speedups to be found there.


Yes, and in fact that kind of automation has been happening continually since the creation of the first stored-program electronic computers.


Yep, and the barrier to entry follows, IMO, a U shaped curve where a field gets easier to enter past the initial development, but then gets more difficult over time as it matures and the easiest tasks continually get whittled away, until increasing abstraction of tools requires sufficiently increased abstraction of thought.



