from Hacker News

The AI Replaces Services Myth

by warthog on 7/17/25, 6:51 PM with 53 comments

  • by kelseyfrog on 7/17/25, 8:42 PM

    > You have to start from how the reality works and then derive your work.

    Every philosopher eventually came to the same realization: we don't have access to the world as it is. We have access to a model of the world that predicts, and is predicted by, our senses. Insofar as there is a correlation between the two, at whatever fidelity we can muster, we are fated to have direct access only to a simulacrum.

    For the most part the two agree, but there is a serious flaw: our model inevitably influences our interpretation of our senses. This sometimes gets us into trouble, when aspects of our model become self-reinforcing by framing sense input in ways that amplify the very parts of the model that confer the frame. For example, you live in a very different world if you search for, and find, confirmation of your cynicism.

    Arguing over metaphysical ontology is like kids fighting over whose favorite food is the best: it confuses subjectivity with objectivity. It might appear radical, but all frames are subjective, even ones shared by the majority of others.

    Sure, Schopenhauer's philosophy is the mirror of his own nature, but there is no escape hatch. There is no externality - no objective perch to rest on, not even ones shared by others. That's not to say that all subjectivities are equally useful for navigating the world. Some models work better than others for prediction, control, and survival. But we should be clear that useful does not equate to truth: all models are wrong, but some are useful.

    JC, I read the rest. The author doesn’t seem to grasp how profit actually works. Price and value are not welded together: you can sell something for more or less than the value it generates. Using his own example, if the AI and the human salesperson do the same work, their value is identical, independent of what each costs or commands in the market.

    He seems wedded to a kind of market value realism, and from this shaky premise, he arrives at some bizarre conclusions.

  • by neuroelectron on 7/17/25, 10:07 PM

    I have yet to see LLMs solve any new problems. I think it's pretty clear that a lot of the bouncing-ball programming demos are specifically trained for so they can be shown off in marketing and advertising. Ask an AI the most basic logical question about a random video game, like which element synergizes with the ice spike shield in Dragon Cave Masters, and it will make up some nonsense, despite it being something you can look up on gamefaqs.org. Now, I know it knows the game I'm talking about, but in latent space it's just another set of dimensions that flavors likely next-token patterns.

    Sure, if you train an LLM enough on gamefaqs.org, it will be able to answer my question as accurately as an SQL query, and there are a lot of jobs that are just looking up answers that already exist, but these systems are never going to replace engineering teams. Now, I definitely have seen some novel ideas come out of LLMs, especially from earlier models like GPT-3, where hallucinations were more common and prompts weren't normalized into templates, but now we have "mixtures" of "experts" that really keep LLMs from being general intelligences.

  • by jsnk on 7/17/25, 7:56 PM

    """ Not because AI can't do the work. It can.

    But because the economics don't translate the way VCs claim. When you replace a $50,000 employee with AI, you don't capture $50,000 in software revenue. You capture $5,000 if you're lucky. """

    So you are saying, AI does replace labour.

  • by Quarrelsome on 7/17/25, 8:30 PM

    Do execs really dream of entirely removing their engineering departments? If this happens, then I would expect some seriously large companies to fail in the future. For every good idea an exec has, they have X bad ideas that would cause problems, and it's their engineers who save them from those. Conversely, an entirely AI engineering team will say "yes sir, right on it" to every request.

  • by tuatoru on 7/17/25, 7:52 PM

    The title is slightly misleading.

    What the article is really about is the idea that all of the money now paid in wages will somehow be paid to AI companies as AI replaces humans, and why that idea is muddle-headed.

    It points out that businesses think of AI as software, and will pay software-level money for AI, not wage-level money. It finishes with the rhetorical question: are you paying $100k/year to an AI company for each coder you no longer need?

  • by AkshatM on 7/17/25, 8:38 PM

    > Need an example? Good. Coding.

    > You must be paying your software engineers around $100,000 yearly.

    > Now that vibecoding is out there, when was the last time you committed to pay $100,000 to Lovable or Replit or Claude?

    I think the author is attacking a bit of a strawman. Yes, people won't pay human prices for AI services.

    But the opportunity is in democratization - becoming the dominant platform - and bundling - taking over more and more of the lifecycle.

    Your customers individually spend less, but you get more customers, and each customer spends a little extra for better results.

    To respond to the analogy: not everyone had $100,000 to build their SaaS before. Now everyone who has a $100 budget can buy Lovable, Replit, and Claude subscriptions. You only need 1,000 customers to match what you made before.
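
    As a sanity check on that arithmetic, here is a minimal Python sketch (the $100,000 build cost and the $100 subscription price are the figures from this comment; the function name is purely illustrative):

      import math

      def breakeven_customers(old_contract: float, new_price: float) -> int:
          # Number of customers at new_price needed to match one old contract
          return math.ceil(old_contract / new_price)

      # One $100,000 bespoke SaaS build vs. $100 vibecoding subscriptions
      print(breakeven_customers(100_000, 100))  # -> 1000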

  • by i_niks_86 on 7/18/25, 8:46 AM

    VCs forget that pricing power follows perceived category. If customers bucket AI into SaaS, they’ll price it like Slack or Zoom, not like payroll. You might replace $50K in labor, but you’ll only ever capture 10% of that unless your AI walks into meetings and shakes hands.

  • by reportgunner on 7/18/25, 10:57 AM

    It's just one of those consequences of "I don't care about the specifics, just put it in production" that ends in "why didn't you tell me that I completely misunderstood?"

  • by rafaepta on 7/22/25, 12:58 PM

    This article misses the point. It is not about AI replacing workers, but about AI bringing more ROI. Can an AI convert twice as many customers as a $4k salesperson? In a B2C setting, it is reasonable to say YES. I've seen it: better SLAs, fast responses during weekends, better adherence to existing playbooks, mapping out objections that are not in the playbook, and suggesting updates to the prompt itself. In one week the playbook evolved, and today we are converting more customers than the sales team. Does it capture the value of the $4k USD salesperson? If the ROI is superior, yes. Will I pay for it? That is a different story (we developed this ourselves).

  • by satisfice on 7/18/25, 4:08 AM

    The article is about how AI does replace services while also substantially devaluing them.

    I see no evidence that it replaces anything interesting, because AI injects random slop into whatever it does. This is a largely delusional article.