by ezekg on 2/13/26, 7:13 PM with 545 comments
by jackfranklyn on 2/14/26, 1:06 AM
Before our tools: a bookkeeper spends 80% of their time on data entry and transaction categorisation, 20% on actually thinking about the numbers. After: those ratios flip. The bookkeeper is still there, still needed, but now they're doing the part that actually requires judgment.
The catch nobody talks about is the transition period. The people who were really good at the mechanical part (fast data entry, memorised category codes) suddenly find their competitive advantage has evaporated. And the people who were good at the thinking part but slow at data entry are suddenly the most valuable people in the room. That's a real disruption for real humans even if the total number of jobs stays roughly the same.
I think the "AI won't take your job" framing misses this nuance. It's not about headcount. It's about which specific skills get devalued and how quickly people can retool. In accounting at least, the answer is "slowly" because the profession moves at glacial speed.
by RevEng on 2/13/26, 9:30 PM
by gordonhart on 2/13/26, 8:00 PM
by ddtaylor on 2/13/26, 7:45 PM
Take even the most unskilled labor people can think of, such as flipping a burger at a restaurant like McDonald's. In reality that job is multiple different roles mixed into one, and they're constantly changing. Multiple companies have experimented with machines and robots to perform this task, all with very limited success and none with workable economics.
Let's be charitable and assume that this type of fast food worker gets paid $50,000 a year. For that job to be displaced, it needs to be performed by a robot that can be acquired for a reasonable capital expenditure, say $200,000, and that requires no maintenance, upkeep, or subscription fees.
This is a complete non-reality in the restaurant industry. Every piece of equipment they have costs them significant amounts up front and in ongoing maintenance, even the most basic equipment such as a grill or a fryer. The reality is that they pay service technicians and professionals a lot of money to keep that equipment barely working.
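To put rough numbers on it (purely illustrative figures, nothing here is a real quote), the payback math falls apart as soon as you add realistic upkeep:

    # Back-of-the-envelope payback, with made-up but plausible numbers.
    robot_capex = 200_000       # assumed purchase price
    annual_upkeep = 30_000      # assumed service contracts, parts, downtime
    worker_cost = 50_000        # the fully-loaded annual wage from above

    yearly_savings = worker_cost - annual_upkeep   # $20k/year
    print(robot_capex / yearly_savings)            # 10.0 years to break even
    # At $40k upkeep it's 20 years; at $50k+ it never pays back.

And that's before subscriptions, software, and the technician visits every restaurant already budgets for.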
by qgin on 2/13/26, 8:09 PM
If AI can do 80% of your tasks but fails miserably on the remaining 20%, that doesn't mean your job is safe. It means that 80% of the people in your department can be fired and the remaining 20% handle the parts the AI can't do yet.
by dakolli on 2/13/26, 11:45 PM
LLMs don't create anything new; they simply replace human-computer I/O with tokens. That's it, leaving the humans who are replaced to fight for a limited number of jobs. LLMs are not creating new jobs; they only create "AI automate {insert business process}" SaaS products that are themselves heavily automated. I suppose there are more datacenter jobs (for now), and maybe some new ML researcher positions, but I don't really see job growth. Are we all supposed to just go work at a datacenter or in the semiconductor industry (until they automate that too)?
by nphardon on 2/13/26, 7:49 PM
Having said that, it's hard to imagine jobs like mine (working on NP-complete problems) existing if the LLMs continue advancing at the current rate, and it's hard to imagine they won't continue to accelerate since they're writing themselves now, so the limitations of human ability are no longer a bottleneck.
by looneysquash on 2/13/26, 9:01 PM
Lest we forget, software engineers aren't exactly ordinary people: they make quite a bit above the median wage.
AI taking our jobs is scary because it will turn us into "ordinary people". And ordinary people are not ok. They're barely surviving.
by delegate on 2/13/26, 8:53 PM
Software engineers work on Jira tickets, created by product managers and several layers of middle managers.
But the power of recent models is not in working on cogs, their true power is in working on the entire mechanism.
When talking about a piece of software that a company produces, I'll use the analogy of a puzzle.
A human hierarchy (read: company) works on designing the big puzzle at the top and delegating the individual pieces to human engineers. This process goes back and forth between levels in the hierarchy until the whole puzzle slowly emerges. Until recently, AI could only help improve the pieces of the puzzle.
The latest models have gotten really good at working on the entire puzzle - big picture and pieces alike.
This makes the human hierarchy a bottleneck, and ultimately obsolete.
The future seems to be one operator working on the entire puzzle, minus the hierarchy of people.
Of course, it's not just about the software, but about streams of information: customer support, bug tickets, testing, changing customer requirements. All of these can be handled by AI even today. And it will only get better.
This means different things depending on which angle you look at it - yes, it will mean companies will become obsolete, but also that each employee can become a company.
by febed on 2/13/26, 10:56 PM
by hi_hi on 2/14/26, 10:18 AM
I am worried about when they start wanting to make a profit on AI. I'm assuming we either have to pay the actual price for these things (I have no idea what that looks like, but I'm pretty sure it isn't $20 or $200 per month), or we have to put up with the full force advertising. Or most likely, we have to do both.
It'll be another one of those "I remember when..." stories we get to tell our kids. Like "I remember when emails were useful and exciting" or "I remember when I could order a taxi and it was clean, reliable and even came with a bottle of water..." or "I remember when I could have conversations with strangers on the internet that didn't instantly descend into arguments and hate".
by mbgerring on 2/14/26, 3:31 AM
One of the things that drove the tech boom in the 2010s was cloud computing driving the cost of starting an internet company into the ground.
What happens when there’s software you think should exist, and you no longer need to hire a bunch of people at $150k-$250k per year to build it?
by lemax on 2/14/26, 12:07 AM
by RS-232 on 2/13/26, 8:07 PM
At first, it's a pretty big energy hog and if you don't know how to work it, it might crash and burn.
After some time, the novelty wears off. More and more people begin using it because it is a massive convenience that does real work. Luddites who still walk or ride their bike out of principle will be mocked and scoffed at.
Then the mandatory compliance will come. A government-issued license will be required to use it and track its use. This license will be tied to your identity and it will become a hard requirement for employment, citizenship, housing, loans, medical treatment, and more. Not having it will be a liability. You will be excluded from society at large if you do not comply.
Last will come the AI-integrated brain-computer interface. You won't have any choice when machine-gun-wielding Optimus robots corral you into a self-driving Tesla bus to the nearest FEMA camp to receive your Starlink-connected Neuralink N1 command and control chip. You will be decapitated if you refuse the mark of the beast. Rev 20:4
by agentultra on 2/14/26, 5:52 AM
I’m more worried that even if these tools do a bad job people will be too addicted to the convenience to give them up.
Example: recruiters locked into an AI arms race with applicants. The application summaries might be biased and contain hallucinations. The resumes are often copied wholesale from some chat bot or other. Nobody wins, the market continues to get worse, but nobody can stop either.
by cal_dent on 2/13/26, 9:30 PM
AI might not replace current work, but it's already replacing future hypothetical work. Whether it can actually do the job is beside the point in the short term. The way business models work is that if there's an option to reduce your biggest cost (labour), you'd very much give it a go first. We might see a resurgence of labour if it turns out to be all hype, but for the short to medium term there'll be a lot of disruption.
Think we’re already seeing that in employment data in the US, as new hiring and job creation slows. A lot of that will for sure be the current economic environment but I suspect (more so in tech focused industries) that will also be due to tech capex in place of headcount growth
by nphardon on 2/13/26, 8:45 PM
by ef2k on 2/13/26, 9:26 PM
It also argues that models have existed for years and we're yet to see significant job loss. That's true, but AI is only now crossing the threshold of being both capable and reliable enough to automate common tasks.
It's better to prepare for the disruption than the sink or swim approach we're taking now in hopes that things will sort themselves out.
by gniv on 2/14/26, 9:48 AM
by ej88 on 2/13/26, 8:54 PM
for me the 2 main factors are:
1. whether your company's priority is growing or saving
- growing companies, especially those in steep competition, fight for talent, and AI productivity results in more hiring to outcompete
- saving companies are happy to cut jobs to pad margins, due to their monopoly or pressure from investors
2. how 'sequence-of-tasks-like' your job is
- SOTA models can easily automate long-running sequences of tasks with minimal oversight
- the more your job resembles this, the more in danger you are (customer service diffusion is just starting, but i predict this will be one of the first to be heavily disrupted)
- i'm less worried about jobs where your job is a 'role' that comes with accountability and requires you to think big picture about which tasks to do in the first place
by djfergus on 2/14/26, 3:53 AM
This is exactly what chess experts like Kasparov thought in the late 90s: “a grandmaster plus a computer will always beat just a computer”. This became false in less than a decade.
by 827a on 2/13/26, 8:31 PM
There will also be far fewer positions demanding these skills. Easy access to generating code has moved the bottleneck in companies to positions & skills that are substantially harder to hire for (basically: Good Judgement); so while adding Agentic Sorcerers would increase a team's code output, it might be the wrong code. Corporate profit will keep scaling with slower-growing team sizes as companies navigate the correct next thing to build.
by entech on 2/14/26, 9:43 AM
Does everyone really think that the world's governments would allow any level of job loss that would create panic before shutting this whole thing down within the areas they control?
It's probably Western culture bias - people in the UK or US have not seen or experienced big enough government intervention. US citizens are probably feeling a bit of that change now.
by ChrisArchitect on 2/13/26, 8:09 PM
by trilogic on 2/13/26, 7:56 PM
1. You are not affected somehow (you've got savings, connections, you're not living paycheck to paycheck, and you have food on the table).
2. You prefer to steer clear of trouble in complex matters.
Time will tell; it's showing already.
by disfictional on 2/14/26, 7:23 AM
by Flavius on 2/13/26, 7:36 PM
by Davidzheng on 2/13/26, 7:55 PM
by Nevermark on 2/13/26, 8:02 PM
The self-setup here is too obvious.
This is exactly why man + machine can be much worse than just machine. A strong argument needs to address what we, an extremely slow-operating, slow-learning, and slow-adapting species, can do that machines improving in ability and efficiency month over month and year over year cannot do well, or cannot do without us.
It is clear that we are going through a disruptive change, but COVID is not comparable. Job loss is likely to have statistics more comparable to the Black Plague. And sensible people are concerned it could get much worse.
I don’t have the answers, but acknowledging and facing the uncertainty head on won’t make things worse.
by andai on 2/14/26, 8:33 AM
That doesn't exactly bolster the author's position. Sure, there are already companies 30 years behind the curve.
But in an increasingly competitive and fast moving economy, "the human is slowing it down by orders of magnitude" doesn't exactly sound like a vote in favor of the human.
by pjmlp on 2/14/26, 9:06 AM
by npodbielski on 2/14/26, 9:18 AM
That is quite an optimistic view, and one I do not share. The US shitshow with the Epstein files shows what those with power are actually capable of. The Star Trek utopia universe is not the world we are collectively building right now. I would expect instead that, with robotics and AI combined, there will be a lot more technical jobs maintaining and building automated systems that serve rich people but not common folks. But you still need knowledge and skill to do that, which means you still need to learn and teach those skills. Which means you still need education and people working there. You still need people supporting the education sector, and a technical and maintenance sector for AI and robotics. All of them need to eat and have their basic needs fulfilled. You need agriculture and services and housing and entertainment and dozens of other sectors for that too.
So in essence the author is right, but even with AI-capable robots I would not expect utopia, rather some kind of world between Blade Runner and Alien: you won't be scrolling mindlessly while all your needs are met, but rather trying to save money for the things you dream of while working a stupid, mindless job you do not like. Which is basically what most of us are doing right now.
So yes, nothing will change for most of us, but humanity will find a way somehow to make the world suck in so many ways: by exploiting each other, by stealing from each other, by lying, and generally by making the world a living hell for everyone. Because we do not know any better.
AI won't change that. As the old saying goes: a lot has to change for everything to stay the same.
by RIMR on 2/13/26, 7:39 PM
That's a weird way of saying 80 million times.
by SirMaster on 2/13/26, 9:31 PM
by simonw on 2/13/26, 7:39 PM
And it's now at 80 million views! https://x.com/mattshumer_/status/2021256989876109403
It appears to have really caught the zeitgeist.
by gverrilla on 2/14/26, 3:34 PM
by jama211 on 2/14/26, 7:13 AM
by davidw on 2/13/26, 11:10 PM
by xyzsparetimexyz on 2/14/26, 12:46 PM
Ordinary people are ALREADY not doing okay.
by throawayonthe on 2/14/26, 5:44 PM
i'm not sure why it would be more amazing in 2016 than in 2023 where it... wasn't very amazing lol
by wooptoo on 2/14/26, 2:06 PM
by ls612 on 2/14/26, 12:20 AM
by nickorlow on 2/13/26, 10:37 PM
... for the 3rd year in a row. Feels like the new 'year of the Linux desktop'
by hunterpayne on 2/13/26, 11:59 PM
Maybe I am wrong, but the history of business on the web says I am right. If you go back and look at why those businesses think they are successful, and if that analysis is correct, then I am too.
by dawsmik on 2/14/26, 3:00 PM
Dear software programmers: 90% of your jobs are going away soon. Most of you are on the first step. Those of you who progress through these steps the fastest will be the most prepared for what is about to come.
by DeathArrow on 2/14/26, 9:50 AM
by everettde on 2/14/26, 2:35 AM
they don't care about the majority losing jobs, or even starving to death so long as they ensure a great future for themselves and the people they, supposedly, care about.
by lukeigel on 2/13/26, 11:53 PM
by mjr00 on 2/13/26, 8:06 PM
I'm not worried about AI job loss in the programming space. I can use Claude to generate ~80% of my code precisely because I have so much experience as a developer. I intuitively know what is a simple mechanical change (that is to say, uninteresting editing of lines of code) as opposed to a major architectural decision. Claude is great at doing uninteresting things. I love it because that leaves me free to do interesting things.
You might think I'm being cocky. But I've been strongly encouraging juniors to use Claude as well, and they're not nearly as successful. When Claude suggests they do something dumb--and it DOES still suggest dumb things--they can't recognize that it's dumb. So they accept the change, then bang their head on the wall as things don't work, and Claude can't figure it out to help them.
Then there are bad developers who are really fucked by Claude. The ones who really don't understand anything. They will absolutely get destroyed as Claude leads them down rabbit holes. I have specific anecdotes about this from people I've spoken to. One had Claude delete a critical line in an nginx config for some reason, and the dev spent a week trying to resolve it. Another was tasked with writing a simple database maintenance script and came back two weeks later (after constant prodding by teammates for a status update) with a Claude-written reimplementation of an ORM. That developer genuinely thought they just needed another day of churning through Claude tokens to dig themselves out of an existential hole. If you can't think like a developer, these tools won't help you.
I have enough experience to review Claude's output and say "no, this doesn't make sense." Having that experience is critical, especially in what I call the "anti-Goldilocks" zone. If you're doing something precise and small-scoped, Claude will do it without issues. If you try to do something too large ("write a Facebook for dogs app") Claude will ask for more details about what you're trying to do. It's the middle ground where things are a problem: Claude tries to fill in the details when there's something just fundamentally wrong with what it's being asked.
As a concrete example, I was working on a new project and I asked Claude to implement an RPC to update a database table. It did so swimmingly, but also added a "session.commit()" line... just kind of in the middle of somewhere. It was right to do so, of course, since the transaction needed to be committed. And if this app were meant to be a prototype, sure. But anyone with experience knows that randomly committing in the middle of business logic is a recipe for disaster. The real issue, of course, was not having any consistent session management pattern. But a non-developer isn't going to recognize that that's an issue in the first place.
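For anyone who hasn't hit this, here's a minimal sketch of what I mean, assuming SQLAlchemy (which is where "session.commit()" comes from); all the names are hypothetical:

    # A minimal sketch of the session-management point, assuming SQLAlchemy.
    from contextlib import contextmanager

    from sqlalchemy import String
    from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column

    class Base(DeclarativeBase):
        pass

    class User(Base):  # placeholder model for the example
        __tablename__ = "users"
        id: Mapped[int] = mapped_column(primary_key=True)
        email: Mapped[str] = mapped_column(String)

    # What Claude produced, roughly: a commit buried inside business logic.
    def update_user_email(session, user_id, new_email):
        user = session.get(User, user_id)
        user.email = new_email
        session.commit()  # hidden side effect: callers composing this with
                          # other logic get partial commits mid-operation

    # A more consistent pattern: one transaction boundary per unit of work.
    @contextmanager
    def unit_of_work(session_factory):
        session = session_factory()
        try:
            yield session
            session.commit()    # the only commit, at the boundary
        except Exception:
            session.rollback()  # all-or-nothing on failure
            raise
        finally:
            session.close()

With the second pattern, business logic never decides when to commit; the request handler owning the unit of work does.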
Or a sillier example from the same RPC: the gRPC API didn't include a database key to update - a mistake on my part. So Claude's initial implementation of the update RPC was to look at every row in the table and find the ones where the non-edited fields matched. Makes... sense, in a weird roundabout way? But God help whoever ends up vibe coding something like that.
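Roughly, it did the equivalent of this (a hypothetical reconstruction, with plain dicts standing in for the table and all names made up):

    # Claude's workaround when the RPC carried no key: match rows on the
    # fields that weren't being edited.
    def update_without_key(rows, req):
        for row in rows:
            if row["name"] == req["name"] and row["team"] == req["team"]:
                row["status"] = req["status"]  # breaks once two rows match

    # What the fixed API enables: address the row directly by primary key.
    def update_by_key(rows_by_id, req):
        rows_by_id[req["id"]]["status"] = req["status"]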
The type of AI fears are coming from things like this in the original article:
> I'll tell the AI: "I want to build this app. Here's what it should do, here's roughly what it should look like. Figure out the user flow, the design, all of it." And it does. It writes tens of thousands of lines of code. [...] when I test it, it's usually perfect.
Which is great. How many developers are getting paid full-time to make new apps on a regular basis? Most companies, I assume, only build one app. And then they spend years and many millions of dollars working on that app. "Making a new app from scratch" is the easy part! What's hard is adding new features to that app while not breaking others, when your lines of code go from those initial tens of thousands to tens of millions.
There's something to be said about the cheapness of making new software, though. I do think one-off internal tools will become more frequent thanks to AI support. But developers are still going to be the ones driving the AI, as the article says.
by zb3 on 2/14/26, 1:41 AM
There are humans who can't do any mental work that AI can't do. Those humans are not useful for mental work, and that's what can cause real AI job loss. The bar for being useful for mental work is rising rapidly.
Jobs that are easy disappear and are replaced with jobs that are no longer as easy: they either require more mental skill (which many people don't have) or are soul-crushing manual jobs that are also getting harder constantly.
So yes, YOU are not worried, because you are privileged here.
by paulsutter on 2/14/26, 1:21 AM
by chaostheory on 2/13/26, 10:45 PM
AI will buy us some time before economic collapse, though on the bright side the environment can recover a bit, since human growth was the worst stressor.
by jgon on 2/14/26, 1:59 AM
Secondly, David Oks attended the Masters School for high school, an elite private boarding school with tuition currently running 72kUSD/year if you board there and 49kUSD/year if you go there just for schooling (https://en.wikipedia.org/wiki/Masters_School). I am going to go ahead and say that people who were able to have $150k+ spent on their high school education (to say nothing of attending Oxford at 30kGBP/year international student tuition) might just possibly have enough generational family wealth that concerns like job losses seem pretty abstract, or not something to really worry about.
It's just another in a long series of articles downplaying the risks of AI job losses which, when I dig into the authors' backgrounds, turn out to be written by people who have never known any sort of financial precarity in their lives and are frequently involved in AI investment in some manner.
by sunaurus on 2/13/26, 8:26 PM
I’m definitely worried about job loss as a result of the AI bubble bursting, though.
by silexia on 2/14/26, 4:32 AM
by hananova on 2/14/26, 4:27 PM
Once techbros take it too far and a significant number of people face job loss, and thus hardship in housing and feeding themselves, society as a whole is going to wish it had nipped AI in the bud when it still could. Knowing techbros, though, their moment of introspection, if it ever comes, will come far too late.
To me, actively trying to cause mass job loss in a country with essentially zero social security, actively trying to get as many people into the "nothing to lose" state as possible, sounds genuinely suicidal.
by hndamien on 2/14/26, 12:41 AM
by jillesvangurp on 2/14/26, 6:49 AM
The real world is much more resilient and stubborn. The industrial revolution indeed wiped out a lot of jobs. But it created a lot more new ones. Agriculture and food production no longer make up >90% of the economy. The utopian version of that (we all get free food) never happened. The dystopian version (we all starve) didn't happen either. And the Luddite version (we all go back to artisanal farming) didn't happen either. What happened is that well-fed laborers went to work doing completely different stuff. Subsistence farming now only exists in undeveloped countries and regions, e.g. rural Africa.
The simple reality is that we have 8 billion people, probably growing towards 10 billion. These people are going to spend their income on things, and whatever those things are is what the economy is and what we collectively value. If AI puts us all out of work, people aren't going to sit on their hands and go back to subsistence farming. They'll fill the time with whatever it is that they can create income with, so they can spend it on things that are valuable to them.
This notion of value is key. If AI lowers the cost of something, that thing simply becomes cheaper. But we need a lot of valuable and scarce resources to power AI, and that isn't cheap. So there's an equilibrium: only things valuable enough that people still want to pay for them are worth committing those scarce resources to, and as the resources become scarcer they become more valuable and more interesting from an economic point of view. The economy adapts towards activity that facilitates value creation. We're opportunists. It all boils down to what we can do for each other that is valuable and interesting to us. Whatever that is, is where there will be a lot of growth.
I'm in software, I'm not worried about less work. I'm worried about handling the barrage of stuff I don't have time to do that I now need to start worrying about doing. There's no way I'm going to do any of that without AI. It's already generating more work than I can handle. This isn't frivolous stuff that I don't need, it's stuff that's valuable to my company because we can sell it to other companies who need that stuff.
by fogzen on 2/14/26, 4:58 PM
by jmyeet on 2/14/26, 4:53 AM
At no point have worker rights and conditions advanced without being demanded, sometimes violently. The history of maritime safety is written in blood. The robber baron era was peppered with deadly clashes such as the Homestead Strike. As a reminder, we had a private paramilitary force for the wealthy called the Pinkerton Detective Agency (despite the name, they were hired thugs) that at its peak outnumbered the US Army.
Heck, you can go back to the Black Death when there was a labor shortage to work farms and the English Crown tried to pass laws to cap wages to avoid "gouging" by peasants for their labor.
Automation could be very good for society. It could take away menial jobs so we all benefit. But this won't happen naturally because that's essentially a wealth transfer to the poor and the wealthy just won't stand for that.
No, what's going to happen is that AI specifically, and automation in general, will be used to suppress labor wages and further transfer wealth to the already wealthy. We don't need to replace everyone for this to happen. Displacing just 5% of the workforce has a massive effect on wages. The remaining 95% aren't asking for raises, and they're doing more work for the same wages as they pick up whatever the 5% was doing.
We see this exact pattern in the permanent layoff culture in tech right now. At the top you have a handful of AI researchers who command $100M+ pay packages. The vast majority are either happy to still have a job or have been laid off, possibly multiple times, and spend a ton of time going through endless interview rounds for jobs that may not even exist.
This two-tiered society is very much in our near future (IMHO).
In the Depression you had wandering hoboes who were constantly moving, seeking temporary low-paid work and a meal. This situation was so bad we got real socialist change with the New Deal.
2008 killed the entry-level job market and it has yet to recover. That's why you see so many millennials with Masters degrees and a ton of student debt working as baristas. Covid popped the tech labor bubble, something tech companies had been wanting for a long time. Did you not notice that they all started doing layoffs at about the same time? Even when they're massively profitable?
So the author isn't worried about job loss? Delusional. We're teetering on the edge of complete societal collapse.
by Bender on 2/14/26, 11:38 AM