Relatable story. Stepped away from "proper" development for a few years to focus on other work, and AI assistants have genuinely changed how quickly you can go from idea to something that works.
The key shift for me: I used to spend hours stuck on syntax, fighting with build systems, or searching StackOverflow for obscure errors. Now that friction is mostly gone. The actual thinking - what to build, how the pieces fit together, what edge cases matter - is still entirely human. But the translation from "I know what I want this to do" to "working code" is dramatically faster.
The compound interest calculator is a good example of something that would've felt like a weekend project a few years ago but probably took you a couple of hours. That's the unlock - not that AI writes code for you, but that the tedious parts stop blocking the interesting parts.
What surprised me most was how much architectural intuition I'd retained even after years away. The fundamentals don't decay as fast as the syntax knowledge.
Similar path here - studied physics, worked in accounting/finance for years, hadn't shipped code in forever. The thing that clicked for me wasn't the AI itself but realising my domain knowledge had actually been compounding the whole time I wasn't coding.
The years "away" gave me an unusually clear picture of what problems actually need solving vs what's technically interesting to build. Most devs early in their careers build solutions looking for problems. Coming back after working in a specific domain, I had the opposite - years of watching people struggle with the same friction points, knowing exactly what the output needed to look like.
What I'd add to the "two camps" discussion below: I think there's a third camp that's been locked out until now. People who understand problems deeply but couldn't justify the time investment to become fluent enough to ship. Domain experts who'd be great product people if they could prototype. AI tools lower the floor enough that this group can participate again.
The $100 spent on Opus to build 60 calculators is genuinely good ROI compared to what that would have cost in dev hours, even for someone proficient. That's not about AI replacing developers - it's about unlocking latent capability in people who already understand the problem space.
Honestly, I feel as though LLMs have actually changed the way we write posts, especially if a person uses them a lot. That said, I cannot imagine why someone would use an LLM to reply to a random post, and that sentence does read like a mix of LLM and human writing.
The Turing Test is not really science (it isn't an infallible test with a measurable outcome). An AI might never be able to pass the TT for all humans; it just gets to be a higher-fidelity AI. That makes the TT a technology, not a scientific benchmark.
The rearview mirror analogy is spot on. Traditional budgeting tells you what happened, cash flow forecasting tells you when you'll actually hit the wall.
The tricky bit is handling irregular income and timing uncertainty. Are you modeling confidence intervals or just point estimates? Most forecasting tools I've seen fall apart when income isn't predictable - would be curious how you're approaching that.
Thanks for the comment! I'm tracking both irregular and regular income/expenses via common frequencies: weekly, bi-weekly, semi-monthly, monthly, quarterly, annually. That income/expense then shows up appropriately on the Money Calendar with a projected running balance, so even if it's irregular (income or expense) you can easily see it play out in the calendar. Nothing too fancy on the technical end - the user just needs to pick a frequency and/or insert one-off income/expenses as needed. I use this obsessively and find it works well.
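For anyone curious what that kind of frequency expansion looks like in code, here's a minimal sketch - my own assumptions about field names and behaviour, not the actual product's code:

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical sketch only: expand recurring items into dated cash flows and
# compute a projected running balance, roughly like a "Money Calendar".
# (Semi-monthly, quarterly, and annually are omitted to keep it short.)
@dataclass
class RecurringItem:
    name: str
    amount: float          # positive = income, negative = expense
    start: date
    frequency: str         # "weekly", "bi-weekly", "monthly", ...

STEP_DAYS = {"weekly": 7, "bi-weekly": 14}

def occurrences(item: RecurringItem, until: date):
    """Yield the dates an item lands on, up to the horizon."""
    d = item.start
    while d <= until:
        yield d
        if item.frequency in STEP_DAYS:
            d += timedelta(days=STEP_DAYS[item.frequency])
        elif item.frequency == "monthly":
            # naive month step; real code needs end-of-month handling
            month = d.month % 12 + 1
            year = d.year + (d.month == 12)
            d = d.replace(year=year, month=month)
        else:
            raise ValueError(f"unsupported frequency: {item.frequency}")

def projected_balance(opening: float, items, until: date):
    """Return [(date, running_balance)] sorted by date."""
    flows = sorted((day, it.amount) for it in items for day in occurrences(it, until))
    balance, out = opening, []
    for day, amount in flows:
        balance += amount
        out.append((day, round(balance, 2)))
    return out

items = [
    RecurringItem("salary", 2500.0, date(2025, 1, 3), "bi-weekly"),
    RecurringItem("rent", -1400.0, date(2025, 1, 1), "monthly"),
]
for day, bal in projected_balance(1000.0, items, date(2025, 3, 31)):
    print(day, bal)
```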
This is exactly why I've avoided raising so far. Not because VC money is bad - obviously it enables things that wouldn't otherwise be possible - but because I know myself well enough to recognise I'd react exactly like this.
The author nails it: "I started to actually operate in a way that is counterproductive for my startup, while thinking I was actually doing what was best." That's the dangerous part. The pressure doesn't announce itself as pressure. It masquerades as ambition, urgency, drive.
Bootstrapping has its own version of this though. Instead of investor expectations, you've got the slow burn of "am I wasting years of my life on something that needs capital to work?" The grass is always greener. At least with VC money, you can move fast and find out if you're wrong. With bootstrapping, you can spend 3 years proving out something that would have taken 6 months with proper funding.
Neither path is inherently better. But knowing which one will fuck with your head less is worth figuring out before you're in the middle of it.
If you can bootstrap, bootstrap. That's my advice.
You might be able to move faster with VC money, depends on your product. But getting that VC money can break you. And now you're on the hook and you've lost full control.
If they hadn't taken the money, what would the counterfactual article have said?
Everybody creates narratives and belief-systems, where causes and effects seem so clear.
Perhaps I'm far too skeptical about their self-analysis. I've met very few people where their own analysis about themselves has matched what I have read in them. So many people misread their own minds and emotional drives. So I have learnt to cynically look for excuses and rationalisations and justifications.
The article is brilliant because we too rarely hear about people's doubts and negatives.
Some people (whether successful or not) do know themselves, but it is uncommon in my experience.
I helped bootstrap a business and we were in an incubator. There I saw some of the side-effects on businesses and founders from taking investment. We were just lucky that we couldn't be bothered with doing the distracting pony-show to get an investor onboard: we would have taken investment if it weren't so costly to do so.
With bootstrapping at least you can de-risk with consulting/freelancing. And I think with the new generation of software development tools it's much easier to validate the core business problems without grinding out code for weeks on end.
The pattern I've noticed building tooling for accountants: automation rarely removes jobs, it changes what the job looks like.
The bookkeepers I work with used to spend hours on manual data entry. Now they spend that time on client advisory work. The total workload stayed the same - the composition shifted toward higher-value tasks.
Same dynamic played out with spreadsheets in the 80s. Didn't eliminate accountants - it created new categories of work and raised expectations for what one person could handle.
The interesting question isn't whether developers will be replaced but whether the new tool-augmented developer role will pay less. Early signs suggest it might - if LLMs commoditise the coding part, the premium shifts to understanding problems and systems thinking.
I would add that most of the premium of a modern SWE has always been on understanding problems and systems thinking. LLMs raise the floor and the ceiling, to the point where the vast majority of it will now be on systems and relationships.
Machine learning is nothing like integer programming. It is an emulation of biological learning, it is designed explicitly to tackle the same problems human minds excel at. It is an organism in direct competition with human beings. Nothing can be more dangerous than downplaying this.
This is because the demand for most of what accountants do is driven by government regulations and compliance. Something that always expands to fill the available budget.
The pattern that gets missed in these discussions: every "no-code will replace developers" wave actually creates more developer jobs, not fewer.
COBOL was supposed to let managers write programs. VB let business users make apps. Squarespace killed the need for web developers. And now AI.
What actually happens: the tooling lowers the barrier to entry, way more people try to build things, and then those same people need actual developers when they hit the edges of what the tool can do. The total surface area of "stuff that needs building" keeps expanding.
The developers who get displaced are the ones doing purely mechanical work that was already well-specified. But the job of understanding what to build in the first place, or debugging why the automated thing isn't doing what you expected - that's still there. Usually there's more of it.
Classic Jevons Paradox - when something gets cheaper the market for it grows. The unit cost shrinks but the number of units bought grows more than this shrinkage.
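To make that concrete with toy numbers (purely illustrative, not data from any study) - the Jevons-style outcome requires demand to grow by more than the cost shrinks, i.e. price elasticity of demand greater than 1:

```python
# Toy numbers, purely illustrative: total spend grows only if demand grows
# by more than the unit cost falls (price elasticity of demand > 1).
old_cost, new_cost = 100.0, 20.0      # unit cost drops 5x
old_units, new_units = 1_000, 8_000   # demand grows 8x (assumed)

old_spend = old_cost * old_units      # 100,000
new_spend = new_cost * new_units      # 160,000

print(new_spend > old_spend)          # True: total spend grew despite cheaper units
```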
Of course that is true. The nuance here is that software isn't just getting cheaper; the activity of building it is changing. Instead of writing lines of code you are writing requirements. That shifts who can do the job. The customer might be able to do it themselves. This removes a market rather than growing one. I am not saying the market will collapse, just be careful applying a blunt theory to such a profound technological shift that isn't just lowering cost but changing the entire process.
You say that like someone who has been coding for so long you have forgotten what it's like to not know how to code. The customer will have little idea what is even possible and will ask for a product that doesn't solve their actual problem. AI is amazing at producing answers you previously would have looked up on Stack Overflow, which is very useful. It can often type faster than I can, which is also useful. However, if we were going to see the exponential improvements towards AGI that AI boosters talk about, we would have already seen the start of it.
When LLMs first showed up publicly it was a huge leap forward, and people assumed it would continue improving at the rate they had seen but it hasn't.
Exactly. The customer doesn't know what's possible, but increasingly neither do we unless we're staying current at frontier speed.
AI can type faster and answer Stack Overflow questions. But understanding what's newly possible, what competitors just shipped, what research just dropped... that requires continuous monitoring across arXiv, HN, Reddit, Discord, Twitter.
The gap isn't coding ability anymore. It's information asymmetry. Teams with better intelligence infrastructure will outpace teams with better coding skills.
That's the shift people are missing.
Hey, welcome to HN. I see that you have a few LLM generated comments going here, please don’t do it as it is mostly a place for humans to interact. Thank you.
No, I'm pretty sure the models are still improving, or the harnesses are, and I don't think that distinction is all that important for users. Where were coding agents in 2025? In 2024? I'm pretty amazed by the improvements in the last few months.
I'm both amazed by the improvements, and also think they are fundamentally incremental at this point.
But I'm happy about this. I'm not that interested in or optimistic about AGI, but having increasingly great tools to do useful work with computers is incredible!
My only concern is that it won't be sustainable, and it's only as great as it is right now because the cost to end users is being heavily subsidized by investment.
>The customer will have little idea what is even possible and will ask for a product that doesn't solve their actual problem.
How do you know that? For tech products most of the users are also technically literate and can easily use Claude Code or whatever tool we are using. They easily tell CC specifically what they need. Unless you create social media apps or bank apps, the customers are pretty tech savvy.
One example is programmers who code physics simulations that run on massive data. You need a decent amount of software engineering skill to maintain software like that, but the programmer maybe has a BS in Physics and doesn't really know the nuances of the actual algorithm being implemented.
With AI, probably you don’t need 95% of the programmers who do that job anyway. Physicists who know the algorithm much better can use AI to implement a majority of the system and maybe you can have a software engineer orchestrate the program in the cloud or supercomputer or something but probably not even that.
Okay, the idea I was trying to get across before I rambled was that many times the customer knows what they want very well and much better than the software engineer.
Yes, I made the same point. Customers are not as dumb as our PMs and Execs think they are. They know their needs more than us, unless its about social media and banks.
I agree. People forget that people know how to use computers and have a good intuition about what they are capable of. It's the programming task that many people can't do. It's unlocking users to solve their own problems again.
Have you ever paid for software? I have, many times, for things I could build myself
Building it yourself as a business means you need to staff people, taking them away from other work. You need to maintain it.
Run even conservative numbers for it and you'll see it's pretty damn expensive if humans need to be involved. It's not the norm for that to be good ROI.
No matter how good these tools get, they can't read your mind. It takes real work to get something production ready and polished out of them
You are missing the point. Who said anything about turning what they make into a “business”. Software you maintain merely for yourself has no such overhead.
There are also technical requirements, which, in practice, you will need to make for applications. Technical requirements can be done by people that can't program, but it is very close to programming. You reach a manner of specification where you're designing schemas, formatting specs, high level algorithms, and APIs. Programmers can be, and are, good at this, and the people doing it who aren't programmers would be good programmers.
At my company, we call them technical business analysts. Their director was a developer for 10 years, and then skyrocket through the ranks in that department.
I think it's insane that people think anyone can just "code" an app with AI and have it replace actual paid or established open-source software, especially if they are not programmers or don't know how to think like one. It might seem obvious if you work in tech, but most people don't even know what an HTTP server is or what Python is, let alone understand best practices or any kind of high-level thinking about applications and code. And if you're willing to spend the time learning all that, you might as well learn programming too.
AI usage in coding will not stop, of course, but normal people vibe coding production-ready apps is a pipe dream that has many issues independent of how good the AI/tools are.
I think this comment will not age well. I understand where you are coming from. You are missing the idea that infrastructure will come along to support vibe coding. You are assuming vibe coding as it stands today will not be improved. It will get to the point where the vibe coder needs to know less and less about the underlying construction of software.
The way I would approach writing specs and requirements as code would be to write a set of unit-tests against a set of abstract classes used as arguments of those unit-tests. Then let someone else, maybe AI, write the implementation as a set of concrete classes, and then verify that those unit-tests pass.
I'm not sure how well that would work in practice, nor why such an approach is not used more often than it is. But yes the point is that then some humans would have to write such tests as code to pass to the AI to implement. So we would still need human coders to write those unit-tests/specs. Only humans can tell AI what humans want it to do.
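A minimal sketch of what "unit tests against abstract classes" could look like in Python with pytest - the Stack interface and implementation here are made up for illustration, not a proposal for a real spec format:

```python
import abc
import pytest

# Illustrative names only: the "spec" is a set of tests written against an
# abstract interface; any implementation (human- or AI-written) is plugged in
# via the fixture and must pass the same tests.

class Stack(abc.ABC):
    @abc.abstractmethod
    def push(self, item): ...
    @abc.abstractmethod
    def pop(self): ...
    @abc.abstractmethod
    def __len__(self) -> int: ...

class ListStack(Stack):
    """Stand-in for the generated/concrete implementation."""
    def __init__(self):
        self._items = []
    def push(self, item):
        self._items.append(item)
    def pop(self):
        return self._items.pop()
    def __len__(self):
        return len(self._items)

@pytest.fixture(params=[ListStack])   # add more implementations here
def stack(request) -> Stack:
    return request.param()

def test_push_then_pop_returns_last_item(stack):
    stack.push(1)
    stack.push(2)
    assert stack.pop() == 2

def test_len_tracks_pushes_and_pops(stack):
    assert len(stack) == 0
    stack.push("x")
    assert len(stack) == 1
    stack.pop()
    assert len(stack) == 0
```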
The problem is that a sufficient black box description of a system is way more elaborate than the white box description of the system, or even a rigorous description of all acceptable white boxes (a proof). Unit tests contain enough information to distinguish an almost correct system from a more correct one, but way more information is needed to even arrive at the almost correct system. Also, even the knowledge of which traits likely separate an almost correct one from the correct one likely requires a lot of white box knowledge.
Unit tests are the correct tool for going from an almost correct system to a correct one, which is hard because it implies driving the failure rate to zero, and the lower you go the harder it is to reduce it any further. But when your constraint is not an infinitesimally small failure rate but reaching expressiveness fast, then a naive implementation or a mathematical model is a much denser representation of the information, and thus easier to generate. In practical terms, it is much easier to encode the slightly incorrect preconception you have in your mind than to try to enumerate all the cases in which a statistically generated system might deviate from the preconception you already had in your head.
“write a set of unit-tests against a set of abstract classes used as arguments of such unit-tests.”
An exhaustive set of use cases to confirm vibe AI generated apps would be an app by itself. Experienced developers know what subsets of tests are critical, avoiding much work.
I agree (?) that using AI vibe-coding can be a good way to produce a prototype for stakeholders, to see if the AI output is actually something they want.
The problem I see is how to evolve such a prototype to more correct specs, or changed specs in the future, because AI output is non-deterministic -- and "vibes" are ambiguous.
Giving AI more specs or modified specs means it will have to re-interpret the specs and since its output is non-deterministic it can re-interpret viby specs differently and thus diverge in a new direction.
Using unit-tests as (at least part of) the spec would be a way to keep the specs stable and unambiguous. If the AI is re-interpreting vibey, ambiguous specs, then the specs are unstable, which means the final output has a hard time converging to a stable state.
I've asked this before, not knowing much about AI-sw-development, whether there is an LLM that given a set of unit-tests, will generate an implementation that passes those unit-tests? And is such practice used commonly in the community, and if not why not?
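I don't know of a standard off-the-shelf tool that does exactly this, but the loop itself is easy to sketch. In the snippet below, generate_implementation is a placeholder for whatever model/API call you'd use; only the pytest-running part is concrete:

```python
import subprocess
import sys
import tempfile
from pathlib import Path
from typing import Optional

def generate_implementation(spec_tests: str, feedback: str = "") -> str:
    """Placeholder for an LLM call: given the test file (the 'spec') and the
    previous pytest output, return candidate implementation source code."""
    raise NotImplementedError("wire this up to whichever model/API you use")

def implement_until_green(spec_tests: str, max_rounds: int = 5) -> Optional[str]:
    feedback = ""
    for _ in range(max_rounds):
        candidate = generate_implementation(spec_tests, feedback)
        with tempfile.TemporaryDirectory() as tmp:
            Path(tmp, "impl.py").write_text(candidate)
            Path(tmp, "test_spec.py").write_text(spec_tests)
            result = subprocess.run(
                [sys.executable, "-m", "pytest", "-q", tmp],
                capture_output=True, text=True,
            )
        if result.returncode == 0:
            return candidate                       # the spec (tests) passes
        feedback = result.stdout + result.stderr   # feed failures back to the model
    return None                                    # spec not satisfied within budget
```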
> Experienced developers know what subsets of tests are critical, avoiding much work.
And they do know this for programs written by other experienced developers, because they know where to expect "linearity" and where to expect steps in the output function. (Testing 0, 1, 127, 128, 255 is important; 89 and 90 likely not, unless that's part of the domain knowledge.) This is not necessarily correct for statistically derived algorithm descriptions.
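For illustration, this is the kind of boundary-focused test an experienced developer writes because they know where the steps are - the clamp function here is just a stand-in:

```python
import pytest

# Illustrative stand-in: tests clustered where an experienced developer
# expects "steps" in behaviour, e.g. around an unsigned 8-bit boundary.
def clamp_to_byte(n: int) -> int:
    return max(0, min(255, n))

@pytest.mark.parametrize("value, expected", [
    (-1, 0), (0, 0), (1, 1),     # lower edge
    (127, 127), (128, 128),      # around the sign-bit boundary
    (255, 255), (256, 255),      # upper edge
])
def test_clamp_at_boundaries(value, expected):
    assert clamp_to_byte(value) == expected
```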
That depends a bit on whether you view and use unit-tests for
a) Testing that the spec is implemented correctly, OR
b) As the Spec itself, or part of it.
I know people have different views on this, but if unit-tests are not the spec, or part of it, then we must formalize the spec in some other way.
If the Spec is not written in some formal way then I don't think we can automatically verify whether the implementation implements the spec, or not. (that's what the cartoon was about).
> then we must formalize the spec in some other way.
For most projects, the spec is formalized in formal natural language (like any other spec in other professions) and that is mostly fine.
If you want your unit tests to be the spec, as I wrote in https://news.ycombinator.com/item?id=46667964, there would be quite A LOT of them needed. I'd rather learn to write proofs than try to exhaustively list all possible combinations of a (near) infinite number of input/output pairs. Unit-tests are simply the wrong tool, because they imply taking excerpts from the library of all possible books. I don't think that is what people mean with e.g. TDD.
What the cartoon is about is that any formal(-enough) way to describe program behaviour will just be yet another programming tool/language. If you have some novel way of program specification, someone will write a compiler and then we might use it, but it will still be programming and LLMs ain't that.
Anecdote: I have decades of software experience, and am comfortable both writing code myself and using AI tools.
Just today, I needed a basic web application, the sort of which I can easily get off the shelf from several existing vendors.
I started down the path of building my own, because, well, that's just what I do, then after about 30 minutes decided to use an existing product.
I have hunch that, even with AI making programming so much easier, there is still a market for buying pre-written solutions.
Further, I would speculate that this remains true of other areas of AI content generation. For example, even if it's trivially easy to have AI generate music per your specifications, it's even easier to just play something that someone else already made (be it human-generated or AI).
I've heard that SaaS never really took off in China because the oversupply of STEM people has suppressed developer salaries so low that companies just hire a team of devs to build out all their needs in house. Why pay for SaaS when devs are so cheap? These are just anecdotes. It's hard for me to figure out what's really going on in China.
What if AI brings the China situation to the entire world? Would the mentality shift? You seem to be basing it on the cost-benefit calculations of companies today. Yes, SaaS makes sense when you have developers (many of whom could be mediocre) who are so expensive that it makes more sense to pay a company that has already gone through the work of finding good developers and spent the capital to build a decent version of what you're looking for. Compare that to a scenario where the cost of a good developer has fallen dramatically, so you can now produce the same results with far less money: a cheap developer (good or mediocre, it doesn't matter) guiding an AI. That cheap developer doesn't even have to be in the US.
> I've heard that SaaS never really took off in China because the oversupply of STEM people has suppressed developer salaries so low that companies just hire a team of devs to build out all their needs in house. Why pay for SaaS when devs are so cheap? These are just anecdotes. It's hard for me to figure out what's really going on in China.
At the high end, China pays SWEs better than South Korea, Japan, Taiwan, India, and much of Europe, so they attract developers from those locations. At the low end, they have a ton of low- to mid-tier developers from 3rd-tier+ institutions who can hack well enough. It is sort of like India: skilled people with credentials to back it up can do well, but there are tons of lower-skilled people with some ability who are relatively cheap and useful.
China is going big into local LLMs, not sure what that means long term, but Alibaba's Qwen is definitely competitive, and it's the main story these days if you want to run a coding model locally.
Thank you for the insight. Those countries you listed are nowhere near US salaries. I wonder what the SaaS market is like in Europe? I hear it's utilized, but that the problem is too much reliance on American companies.
I hear those other Asian countries are just like China in terms of adoption.
>China is going big into local LLMs, not sure what that means long term, but Alibaba's Qwen is definitely competitive, and its the main story these days if you want to run a coding model locally.
It seems like China's strategy of low-cost LLMs applied pragmatically to all layers of the country's "stack" is the better approach, at least right now. Here in the US they are spending every last penny to try and build some sort of Skynet god. If it fails, well, I guess the Chinese were right after all. If it succeeds, well, I don't know what will happen then.
When I worked in China for Microsoft China, I was making 60-70% what I would have made back in the US working the same job, but my living expenses actually kind of made up for that. I learned that most of my non-Chinese asian colleagues were in it for the money instead of just the experience (this was basically my dream job, now I have to settle for working in the states for Google).
> It seems like China's strategy of low-cost LLMs applied pragmatically to all layers of the stack is the better approach, at least right now. Here in the US they are spending every last penny to try and build some sort of Skynet god. If it fails, well, I guess the Chinese were right after all. If it succeeds, well, I don't know what will happen then.
China lacks those big NVIDIA GPUs that were sanctioned and are now export-tariffed, so going with smaller models that could run on hardware they could access was the best move for them. This could either work out (local LLM computing is the future, and China is ahead of the game by circumstance) or maybe it doesn't work out (big server-based LLMs are the future and China is behind the curve). I think the Chinese government would have actually preferred centralized control and censorship, but the current situation is that the Chinese models are the most uncensored you can get these days (with some fine-tuning, they are heavily used in the adult entertainment industry... haha, socialist values).
I wouldn't trust the Chinese government to not do Skynet if they get the chance, but Chinese entrepreneurs are good at getting things done and avoiding government interference. Basically, the world is just getting lucky by a bunch of circumstances ATM.
Fair point! And I wasn't clear: my anecdote was me, personally, needing an instance of some software. Rather than me personally either write it by hand, or even write it using AI, and then host it, I just found an off-the-shelf solution that worked well enough for me. One less thing I have to think about.
I would agree that if the scenario is a business, to either buy an off-the-shelf software solution or pay a small team to develop it, and if the off-the-shelf solution was priced high enough, then having it custom built with AI (maybe still with a tiny number of developers involved) could end up being the better choice. Really all depends on the details.
Does that automatically translate into more openings for the people whose full time job is providing that thing? I’m not sure that it does.
Historically, it would seem that often lowering the amount of people needed to produce a good is precisely what makes it cheaper.
So it’s not hard to imagine a world where AI tools make expert software developers significantly more productive while enabling other workers to use their own little programs and automations on their own jobs.
In such a world, the number of “lines of code” being used would be much greater than today.
But it is not clear to me that the amount of people working full time as “software developers“ would be larger as well.
> Does that automatically translate into more openings for the people whose full time job is providing that thing?
Not automatically, no.
How it affects employment depends on the shapes of the relevant supply/demand curves, and I don't think those are possible to know well for things like this.
For the world as a whole, it should be a very positive thing if creating usable software becomes an order of magnitude cheaper, and millions of smart people become available for other work.
Given the products that the software industry is largely focused on building (predatory marketing for the attention economy and surveillance), this unfortunately may be the case.
I debate this in my head way too much & from each & every perspective.
Counter-argument - if what you say is true, we will have a lot more custom & personalized software, and the tech stacks behind it may be even more complicated than they currently are because we now want to add LLMs that can talk to our APIs. We might also be adding multiple LLMs to our back ends to do things as well. Maybe we're replacing 10 devs, but now someone has to manage that LLM infrastructure as well.
My opinion will change by tomorrow, but I could see more people building software who are currently experts in other domains. I can also see software engineers focusing more on keeping the new, more complicated architectures being built from falling apart & trying to enforce tech standards. Our roles may become more infra & security. Fewer features, more stability & security.
Jevons Paradox does not last forever in a single sector, right? Take the manufacturing business for example. We can make more and more stuff at increasingly lower prices, yet we ended up outsourcing our manufacturing and the entire sector withered. Manufacturing also got less lucrative over the years, which means there has been less and less demand for labor.
You're right. I updated it to "in a single sector". The context is about the future demand of software engineers, hence I was wondering if it would be possible that we wouldn't have enough demand for such profession, despite that the entire society will benefit for the dropping unit cost and probably invented a lot of different demand in other fields.
I'm quite convinced that software (and, more broadly, implementing the systems and abstractions) seems to have virtually unlimited demand. AI raises the ceiling and broadens software's reach even further as problems that previously required some level of ingenuity or intelligence can be automated now.
Jevons paradox is stupid. What happened in the past is not a guarantee for the future. If you look at the economy, you would struggle to find buyers for any slop AI can generate, but execs keep pushing it. Case in point: the whole Microslop saga, where execs start treating paying customers as test subjects to please the shareholders.
A good example is the many users looking to ditch Windows for Linux due to AI integrations and a generally worse user experience. Is this the year of the Linux desktop?
> Classic Jevons Paradox - when something gets cheaper the market for it grows. The unit cost shrinks but the number of units bought grows more than this shrinkage.
That's completely disconnected from whether software developer salaries decrease or not, or whether the software developer population decreases or not.
The introduction of the loom introduced many many more jobs, but these were low-paid jobs that demanded little skill.
All automation you can point to in history resulted in operators needing less skill to produce, which results in less pay.
There is no doubt (i.e. I have seen it) that lower-skilled folk are absolutely going to crush these elitist developers who keep going on about how they won't be affected by automated code-generation, that it will only be those devs doing unskilled mechanical work.
Sure - because prompting requires all that skill you have? Gimme a break.
This suggests that the latent demand was a lot, but it still doesn't prove it is unbounded.
At some point the low-hanging automation fruit gets tapped out. What can be put online that isn't there already? Which business processes are obviously going to be made an order of magnitude more efficient?
Moreover, we've never had more developers and we've exited an anomalous period of extraordinarily low interest rates.
Yep, the current crunch experienced by developers falls massively (but not exclusively) on younger, less experienced developers.
I was working in developer training for a while some 5-10 years back, and already then I was starting to see signs of incoming over-saturation; the low interest rates probably masked much of it due to happy-go-lucky investments sucking up developers.
Low-hanging and cheap automation work, etc. is quickly dwindling now, especially as development firms search out new niches when the big "in-IT" customers aren't buying services inside the industry.
Luckily people will retire and young people probably aren't as bullish about the industry anymore, so we'll probably land in an equilibrium; the question is how long it'll take, because the long tail of things enabled by the mobile/tablet revolution is starting to be claimed.
Look at traditional manufacturing. Automation has made massive inroads. Not as much of the economy is directly supporting (eg, auto) manufacturers as it used to be (stats check needed). Nevertheless, there are plenty of mechanical engineering jobs. Not so many lower skill line worker jobs in the US any more, though. You have to ask yourself which category you are in (by analogy). Don’t be the SWE working on the assembly line.
Pre-industrial revolution, something like 80+ percent of the population was involved in agriculture. I question the assertion of more farmers now, especially since an ever-growing percentage of farms are not even owned by corporeal entities, never mind actual farmers.
ooohhh I think I missed the intent of the statement... well done!
80% of the world population back then is less than 50% of the current number of people working in farming, so the assertion isn’t wrong, even if fewer people are working on farming proportionally (as it should be, as more complex, desirable and higher paid options exist)
i don't think you missed it. Perhaps sarcasm, but the main comment is specifically about programming and seems so many sub comments want to say "what about X" that's nothing to do with programming.
The machinery replaced a lot of low skill labor. But in its wake modern agriculture is now dependent on high skill labor. There are probably more engineers, geologists, climatologists, biologists, chemists, veterinarians, lawyers, and statisticians working in the agriculture sector today than there ever were previously.
Key difference being that there is only a certain amount of food that a person can physically eat before they get sick.
I think it’s a reasonable hypothesis that the amount of software written if it was, say, 20% of its present cost to write it, would be at least 5x what we currently produce.
Is that farm hands, or farm operators? What about corps, how do you calibrate that? Is a corp a "person" or does it count for more? My point is that maybe the definition of "farmer" is being pushed too far, as is the notion of "developer". "Prompt engineer"? Are you kidding me about that? Prompts are about as usefully copyrighted / patentable as a white paper. Do you count them as "engineers" because they say so?
I get your point, hope you get mine: we have less legal entities operating as "farms". If vibe coding makes you a "developer", working on a farm in an operating capacity makes you a "farmer". You might profess to be a biologist / agronomist, I'm sure some owners are, but doesn't matter to me whether you're the owner or not.
The numbers of nonsupervisory operators in farming activities have decreased using the traditional definitions.
If AI tools make expert developers a lot more productive on large software projects, while empowering non-developers to create their own little programs and automations, I am not sure how that would increase the number of people with “software developer” as their full-time job.
It happened with tools like Excel, for example, which matches your description of empowering non-developers. It happens with non-developers setting up a CMS and then, when hitting the limits of what works out of the box, hiring or commissioning developers to add more complex functions and integrations. Barring AGI, there will always be limitations, and hitting them induces the desire to go beyond.
> when hitting the limits of what works out of the box, hiring or commissioning developers to add more complex functions and integrations.
You aren't going to do that to AI systems. If, after a couple of weeks, you hit the limit of what the AI could do in a million+ LoC, you aren't going to be able to hire a human dev to modify or replace that system for you, because:
1. Humans are going to be needing a ramp up time and that's damn costly (even more costly when there are fewer of them).
2. Where are you going to find humans who can actually code anymore if everyone has been doing this for the last 10 years?
> So, what do you propose non-developers in that situation will be doing then?
Look, I dunno what they will do, but these options are certainly off the table:
1. Get a temp dev/team in to patch a 1M SLoC mess
2. Do it cost-effectively.
If the tech has improved by the time this happens (I mean, we're nowhere near this scenario yet, and it has already plateaued) then perhaps they can get the LLM itself to simply rewrite it instead of spending all those valuable tokens reading it in and trying to patch it.
If the tech is not up to it, then their options are effectively:
There’s only so much land and only so much food we need to eat. The bounds on what software we need are much wider. But certainly there is a limit there as well.
I think the better example is the mechanization of the loom created a huge amount of jobs in factories relative to the hand loom because the demand for clothing could not be met by the hand loom.
The craftsmen who were forced to go to the factory were not paid more or better off.
There are not going to be more software engineers in the future than there are now, at least not in what would be recognizable as software engineering today. I could see there being vastly more startups with founders as agent orchestrators, and many more CTO jobs. There is no way there are many more 2026-style software engineering jobs at S&P 500 companies in the future. That seems borderline delusional to me.
Wait what? There are way less farmers than we had in the past. In many parts of the world, every member of the family was working on the farm, and now only 1 person can do the work of 5-10 people.
I felt like the article had a good argument for why the AI hype will similarly be unsuccessful at erasing developers.
> AI changes how developers work rather than eliminating the need for their judgment. The complexity remains. Someone must understand the business problem, evaluate whether the generated code solves it correctly, consider security implications, ensure it integrates properly with existing systems, and maintain it as requirements evolve.
What is your rebuttal to this argument leading to the idea that developers do need to fear for their job security?
LLMs don't learn from their own mistakes in the same way that real developers and businesses do, at least not in a way that lends itself to RLVR.
Meaningful consequences of mistakes in software don't manifest themselves through compilation errors, but through business impacts which so far are very far outside of the scope of what an AI-assisted coding tool can comprehend.
> through business impacts which so far are very far outside of the scope of what an AI-assisted coding tool can comprehend.
That is, the problems are a) how to generate a training signal without formally verifiable results, b) hierarchical planning, c) credit assignment in a hierarchical planning system. Those problems are being worked on.
There are some preliminary research results that suggest that RL induces hierarchical reasoning in LLMs.
> evaluate whether the generated code solves it correctly, consider security implications, ensure it integrates properly with existing systems, and maintain it as requirements evolve
I think you are basing your reasoning on the current generation of models. But if future generations are able to do everything you've listed above, what work will be left for developers? I'm not saying that we will ever get such models, just that when they appear, they will actually displace developers rather than create more jobs for them.
The business problem will be specified by business people, and even if they get it wrong it won't matter because iteration will be quick and cheap.
> What is your rebuttal to this argument leading to the idea that developers do need to fear for their job security?
The entire argument is based on the assumption that models won't get better and will never be able to do the things you've listed! But once they become capable of these things - what work will be left for developers?
It's not obvious at all. Some people believe that once AI can do the things I've listed, the role of developers will change instead of getting replaced (because advances always led to more jobs, not less).
We are actually already at the level of a magic genie or some sci-fi-level device. It can't do everything, obviously, but what it can do is mind-blowing. And the basis of the argument is obviously right - potential possibility is a really low bar to pass, and AGI is clearly possible.
A $3 calculator today is capable of doing arithmetic that would require superhuman intelligence to do 100 years ago.
It's extremely hard to define "human-level intelligence" but I think we can all agree that the definition of it changes with the tools available to humans. Humans seem remarkably suited to adapt to operate at the edges of what the technology of time can do.
> that would require superhuman intelligence to do 100 years ago
It had required a ton of people of ordinary intelligence doing routine work (see Computer (occupation)). On the other hand, I don't think anyone has seriously considered replacing, say, von Neumann with a large collective of laypeople.
My argument would be that while some complexity remains, it might not require a large team of developers.
What previously needed five devs, might be doable by just two or three.
In the article, he says there are no shortcuts to this part of the job. That does not seem likely to be true. The research and thinking through the solution goes much faster using AI, compared to before where I had to look up everything.
In some cases, agentic AI tools are already able to ask the questions about architecture and edge cases, and you only need to select which option you want the agent to implement.
There are shortcuts.
Then the question becomes how large the productivity boost will be and whether the idea that demand will just scale with productivity is realistic.
Of course in that case it will not happen this time. However, in that case software dev getting automated would concern me less than the risk of getting turned into some manner of office supply.
Imo as long as we do NOT have AGI, software-focused professional will stay a viable career path. Someone will have to design software systems on some level of abstraction.
>every "no-code will replace developers" wave actually creates more developer jobs, not fewer
you mean "created", past tense. You're basically arguing it's impossible for technical improvements to reduce the number of programmers in the world, ever. The idea that only humans will ever be able to debug code or interpret non-technical user needs seems questionable to me.
Also, the percentage of adults working has been dropping for a while. Retirees used to be a tiny fraction of the population; that's no longer the case, and people spend more time being educated, or in prison, etc.
Overall people are seeing a higher standard of living while doing less work.
Efficiency is why things continue to work as fewer people work. Social programs, bank accounts, etc. are just an abstraction; you need a surplus, or the only thing that changes is who starves.
Social programs often compensate for massive distortions in the economy. For example, SNAP benefits both the poor and the businesses where SNAP funds are spent, but that's because a lot of unearned income goes to landowners, preventing people from employing laborers and starting businesses. SNAP merely ameliorates a situation that shouldn't have arisen in the first place.
So, yes, reasons other than efficiency explain why people aren't working, as well as why there are still poor people.
Millions of working Americans don't have cars. Also, you can make the median wage in the US without any college education.
Poverty still exists, but vast inflation of what is considered 'a basic standard of living' hides a great deal of progress. People want to redefine illiteracy to mean being unable to use the internet, rather than judging by the standards of the past.
Yes it is: if we still needed 90+% of the population to work or people starved, that's a self-correcting system. Fewer than that work and you have fewer people.
You can argue about all the many secondary reasons for each of the different groups (retirees, prisoners, etc), but only one thing is required for every group.
>COBOL was supposed to let managers write programs. VB let business users make apps. Squarespace killed the need for web developers. And now AI.
The first line made me laugh out loud because it made me think of an old boss who I enjoyed working with but could never really do coding. This boss was a rockstar at the business side of things and having worked with ABAP in my career, I couldn't ever imagine said person writing code in COBOL.
However the second line got me thinking. Yes VB let business users make apps(I made so many forms for fun). But it reminded me about how much stuff my boss got done in Excel. Was a total wizard.
You have a good point in that the stuff keeps expanding, because while not all bosses will pick up the new stack, many ambitious ones will. I'm sure it was the case during COBOL, during VB, it was certainly the case when Excel hit the scene, and I suspect a lot of people will get stuff done with AI that devs used to do.
>But the job of understanding what to build in the first place, or debugging why the automated thing isn't doing what you expected - that's still there. Usually there's more of it.
Honestly this is the million dollar question that is actually being argued back and forth in all these threads. Given a set of requirements, can AI + a somewhat technically competent business person solve all the things a dev used to take care of? It's possible. I'm wondering whether my boss, who couldn't even tell the difference between React and Flask, could in theory... possibly, with an AI with a large enough context, overcome these mental model limitations. Would be an interesting experiment for companies to try out.
Many business people I've worked with are handy with SQL, but couldn't write e.g. go or python, which always surprised me. IMO SQL is way more inconsistent and has a mental model far more distant from real life than common imperative programming (which simply parallels e.g. a cookbook recipe).
I find SQL becomes a "stepping stone" to level up for people who live and breathe Excel (for obvious reasons).
Now was SQL considered some sort of tool to help business people do more of what coders could do? Not too sure about that. Maybe Access was that tool and it just didn't stick for various reasons.
I knew a guy like that, except his tool of choice was Access. He could code, but it wasn't his strong suit, and when he was out of his element he typically delegated those responsibilities to more technical programmers, including sometimes myself. But with Access he could model a business with tables, and wire it together with VBA business logic, as easily as you and I breathe.
In the face of productivity increases and lower barriers to entry, other professions move to capture the gains for their own members and erect barriers to prevent others from taking their tasks. In IT, we celebrate how our productivity increases benefited the broader economy and how more people in other roles can now build stuff, with the strong belief that employment of developers and adjacent roles will continue to increase and that we'll land those new roles.
> The total surface area of "stuff that needs building" keeps expanding.
I certainly hope so, but it depends on whether we will have more demand for such problems. AI can code up a complex project by itself because we humans do not care about many of the details. When we marvel that AI generates a working dashboard for us, we are really accepting that someone else has created a dashboard that meets our expectations. The layout, the color, the aesthetics, the way it interacts, the time series algorithms, etc. - we don't care, as it does better than we imagined. This, of course, is inevitable, as many of us do spend enormous time implementing what other people have done. Fortunately or unfortunately, it is very hard for a human to repeat other people's work correctly, but it's a breeze for AI. The corollary is that AI will replace a lot of the demand for software developers if we don't have big enough problems to solve - in the past 20 years we had the internet, cloud, mobile, and machine learning, all big trends that required millions and millions of brilliant minds. Are we going to have the same luck in the coming years? I'm not so sure.
I think there's a parallel universe with things like system administration. I remember people not valuing windows sysadmins (as opposed to unix), because all the stuff was gui-based. lol.
Yeah I feel like the better description is that the definition of "developer" expands each time to include each new set of "people who take advantage of the ability to write software to do their jobs".
> The developers who get displaced are the ones doing purely mechanical work that was already well-specified.
And that hits the offshoring companies in India and similar countries probably the most, because those can generally only do their jobs well if everything has been specified to the detail.
But sign painting isn't programming? The comment is insightful and talks specifically about low- and no-code options creating more need for developers. Great point. It has nothing to do with non-programming jobs.
Of course this is true. Just like the need to travel long distances over land will never disappear.
The skills needed to be a useful horseman, though, have almost nothing to do with the skills needed to be a useful train conductor. Most of the horseman's skills don't really transfer, other than being in the same domain of land travel. The horseman also has the problem that they have invested their life and identity into their skill with horses. It massively biases perspective. The person with no experience with horses actually has the huge advantage of a beginner's mind in terms of travel by land at the advent of travel by rail.
The ad nauseam software engineer "horsemen" arguments on this board that there will always be the need to travel long distance by land completely misses the point IMO.
Well, if we’re comparing all jobs to all other jobs - then you may have a valid point. Otherwise, we should probably focus on comparing complexity and supply/demand for the skills and output being spoken about.
this works for small increments in skill or small shifts in adjacent skills.
imagine being an engineer educated in multiple instruction sets: when compilers arrive on the scene it sure makes their job easier, but that does not retroactively change their education to suddenly have all the requisite mathematics and domain knowledge of say algorithms and data structures.
what is euphemistically described as a "remaining need for people to design, debug and resolve unexpected behaviors" is basically a lie by omission: the advent of AI does not automatically mean previously representative human workers suddenly will know higher level knowledge in order to do that. it takes education to achieve that, no trivial amount of chatbotting will enable displaced human workers to attain that higher level of consciousness. perhaps it can be attained by designing software that uploads AI skills to humans...
> lowers the barrier to entry, way more people try to build things, and then those same people need actual developers when they hit the edges of what the tool can do
I was imagining companies expanding the features they wanted and was skeptical that would be close to enough, but this makes way more sense
The contract stability bit rings true from my experience. I've built a few B2B integrations that follow this pattern naturally - the data layer and API contracts are rock solid because changing them means coordinating with external systems, but the application logic gets rewritten fairly often.
Where it gets messy is when your "disposable" layer accumulates implicit contracts. A dashboard that stakeholders rely on, an export format someone's built a process around, a webhook payload shape that downstream systems expect. These aren't in your documented interfaces but they become load-bearing walls.
The discipline required is treating your documented contracts like the actual boundary - version them properly, deprecate formally, keep them minimal. Most teams don't have that discipline and end up with giant surface areas where everything feels permanent.
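One way to make that discipline concrete is to give the "implicit" payloads an explicit, versioned shape - a rough sketch, with made-up field names:

```python
from dataclasses import dataclass, asdict

# Sketch with made-up field names: the documented shape is the only shape,
# and breaking changes ship as a new version rather than silent edits.
@dataclass(frozen=True)
class InvoicePaidV1:
    schema_version: str   # always "1"
    invoice_id: str
    amount_cents: int
    currency: str         # ISO 4217, e.g. "USD"

def build_webhook_payload(invoice_id: str, amount_cents: int, currency: str) -> dict:
    """Everything downstream systems may depend on goes through this one shape."""
    return asdict(InvoicePaidV1("1", invoice_id, amount_cents, currency))

# A breaking change (renaming amount_cents, say) would ship as InvoicePaidV2
# alongside V1, with V1 formally deprecated on a stated timeline.
```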
anything exposed for others to depend on becomes part of the actual boundary. if it might break someone's system when you change it, it's part of your API.
the problem is not in documenting the subset of a giant surface you intend to support; the problem is having a giant surface!
The tricky thing about "data is the only moat" is that it depends heavily on what kind of data you're talking about.
Proprietary training data for foundation models? Sure, that's a real moat - until someone figures out how to generate synthetic equivalents or a new architecture makes your dataset less relevant.
But the more interesting moat is often contextual data - the stuff that accumulates from actual usage. User preferences, correction patterns, workflow-specific edge cases. That's much harder to replicate because it requires the product to be useful enough that people keep using it.
The catch is you need to survive long enough to accumulate it, which usually means having some other differentiation first. Data as a moat is less of a starting position and more of a compounding advantage once you've already won the "get people to use this thing" battle.
Building in a niche B2B space and this resonates. The data moat isn't just volume though - it's the accumulated understanding of edge cases.
In my domain, every user correction teaches the system something new about how actual businesses operate vs how you assumed they did when you wrote the first version. Six months of real usage with real corrections creates something a competitor can't just replicate by having more compute or a bigger training set.
The tricky part is that this kind of moat is invisible until you try to build the same thing. From the outside it looks simple. From the inside you're sitting on thousands of learned exceptions that make the difference between "works on demos" and "works on real data."
We totally found this doing financial document analysis. It's so quick to do an LLM-based "put this document into this schema" proof-of-concept.
Then you run it on 100,000 real documents.
And so you find there actually are so, so many exceptions and special cases. And so begins the journey of constructing the layers of heuristics and codified special cases needed to turn ~80% raw accuracy into something asymptotically close to 100%.
That's the moat. At least where high accuracy is the key requirement.
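For what it's worth, the "layers of heuristics" part often ends up looking like an ordered list of small, individually testable fix-up rules applied after the LLM extraction. A rough sketch with made-up rules and field names:

```python
import re

# Made-up rules and field names, for illustration: raw LLM extraction output
# gets passed through an ordered list of small, individually testable fix-ups.

def fix_negative_in_parens(doc: dict) -> dict:
    """Accounting convention: '(1,234.56)' means a negative amount."""
    amount = doc.get("amount")
    if isinstance(amount, str):
        m = re.fullmatch(r"\((?P<n>[\d,]+\.?\d*)\)", amount.strip())
        if m:
            doc["amount"] = -float(m.group("n").replace(",", ""))
    return doc

def fix_two_digit_year(doc: dict) -> dict:
    """'03/05/99'-style dates: assume 19xx for years >= 70, else 20xx."""
    m = re.fullmatch(r"(\d{2})/(\d{2})/(\d{2})", doc.get("date", ""))
    if m:
        century = "19" if int(m.group(3)) >= 70 else "20"
        doc["date"] = f"{century}{m.group(3)}-{m.group(1)}-{m.group(2)}"
    return doc

RULES = [fix_negative_in_parens, fix_two_digit_year]   # grows with every exception found

def post_process(doc: dict) -> dict:
    for rule in RULES:
        doc = rule(doc)
    return doc

print(post_process({"amount": "(1,234.56)", "date": "03/05/99"}))
# {'amount': -1234.56, 'date': '1999-03-05'}
```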
In case you haven't come across the idea yet, this concept is all the rage among the VC thoughtbois/gorls. Not sure if Jaya Gupta at Foundation coined it or just popularized it, but: context graph.
Could be a good fundraising environment for you if you find the zealots of this idea.
The launch doesn't matter nearly as much as the 12 months after it.
I've been building in a niche B2B space and the pattern I've noticed: products that stick around tend to have one thing in common - they're findable when someone has the exact problem they solve. Not before, not in general, but at the moment of pain.
So rather than trying to "stand out" broadly, I'd focus on being incredibly visible in the specific places your ideal users go when they're frustrated. For SaaS that's usually niche subreddits, specific Slack communities, the comment sections of blog posts about the problem you solve.
The other thing that works: just... not disappearing. Half the products launched last year are already gone or abandoned. If you're still shipping updates and engaging in year two, you've already outlasted most of your competition. Persistence is underrated as a marketing strategy.