from Hacker News

Thesis: Interesting work is less amenable to the use of AI

by koch on 7/6/25, 9:01 PM with 105 comments

  • by overgard on 7/7/25, 9:56 PM

    LLMs can't really reason, in my opinion (and in the opinion of a lot of researchers). Being a little harsh here, but given that I'm pretty sure these things are trained on vast swaths of open source software, I generally feel like what things like Cursor are doing is best described as "fancy automated plagiarism". If the stuff you're doing can be plagiarized from another source and adapted to your own context, then LLMs are pretty useful (and that does describe a LOT of work), although it feels like a bit of a grey area to me ethically. I mean, the good thing about using a library or a plain old Google search or whatnot is that you can give credit, or at least know that the author is happy with you not giving credit. Whereas with whatever Claude or ChatGPT is spitting out, I'm sure you're not going to get in trouble for it, but part of me feels like it's in a really weird area ethically (especially if it's being used to replace jobs).

    Anyway, in terms of "interesting" work: if you can't copy it from somewhere else, then I don't think LLMs are that helpful, personally. I mean, they can still give you small building blocks, but you can't really prompt them to make the thing.

  • by janaagaard on 7/7/25, 6:00 AM

    A Danish audio newspaper host / podcaster reached the exact opposite conclusion when he used ChatGPT to write the manuscript for one of his episodes. He ended up spending as much time as he usually does because he had to fact-check everything the LLM came up with. Spoiler: it made up a lot of stuff, despite the prompt being very clear that it should not do so. And the part the chatbot could help him with, writing the manuscript, was to him the most fun part. His conclusion about artificial intelligence was this:

    “We thought we were getting an accountant, but we got a poet.”

    Frederik Kulager: Jeg fik ChatGPT til at skrive dette afsnit, og testede, om min chefredaktør ville opdage det. ("I had ChatGPT write this episode and tested whether my editor-in-chief would notice.") https://open.spotify.com/episode/22HBze1k55lFnnsLtRlEu1?si=h...

  • by keiferski on 7/7/25, 11:37 AM

    I have gotten much more value out of AI tools by focusing on the process and not the product. By this I mean that I treat it as a loosely-defined brainstorming tool that expands my “zone of knowledge”, and not as a way to create some particular thing.

    In this way, I am infinitely more tolerant of minor problems in the output, because I'm not using the tool to create a specific output; I'm using it to enhance the thing I'm making myself.

    To be more concrete: let’s say I’m writing a book about a novel philosophical concept. I don’t use the AI to actually write the book itself, but to research thinkers/works that are similar, critique my arguments, make suggestions on topics to cover, etc. It functions more as a researcher and editor, not a writer – and in that sense it is extremely useful.

  • by orochimaaru on 7/7/25, 2:35 PM

    My thesis is actually simpler. For the longest time, until the Industrial Revolution, humans mostly did uninteresting work. There was a routine and little else. Intellectuals worked from a very terse knowledge base, and it was handed down from master to apprentice. Post-Renaissance and post-industrial age, the amount of recorded knowledge has exploded, and so have the specializations. Most white-collar work today is managing and searching through this explosion of knowledge and rules, and AI (well, the LLM part) is mostly targeted at automating exactly that. That's all it is. Here is the problem, though: those who are truly clueless fall victim to the hallucinations, while those who have expertise in their field will be able to be more efficient.

    AI isn’t replacing innovation or original thought. It is just working off an existing body of knowledge.

  • by viccis on 7/7/25, 7:00 AM

    The one thing AI is good at is building greenfield projects from scratch using established tools. If what you want to accomplish can be done by a moderately capable coder with some time reading the documentation for the various frameworks involved, then I view AI as fairly similar to the scaffolding that Ruby on Rails did back in the day when I typed "rails new myproject".

    So LLMs are awesome if I want to say "create a dashboard in Next.js and whatever visualization library you think is appropriate that will hit these endpoints [dumping some API specs in there] and display the results to a non-technical user", along with some other context here and there, and get a working first pass to hack on.

    Where they are not awesome is when I'm working on adding a map visualization to that dashboard a year or two later. Then I need to talk to the team that handles some of the API endpoints to discuss how to feed me the map data. Then I need to figure out how to handle large map pin datasets. Oh, and the map shows regions of activity that were clustered with DBSCAN, so I need to know that an alpha shape provides a generalization of the convex hull that lets me visualize the cluster regions exactly, by choosing the alpha parameter to correspond to DBSCAN's epsilon. Etc, etc, etc.
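
    (For concreteness, here is a minimal sketch of that pairing: outline each DBSCAN cluster by keeping only the Delaunay triangles whose circumradius is at most the DBSCAN eps. It uses standard scikit-learn/SciPy calls; the data, eps value, and plotting step are placeholders rather than anything from the actual dashboard.)

        import numpy as np
        from scipy.spatial import Delaunay
        from sklearn.cluster import DBSCAN

        def alpha_shape_triangles(points, radius):
            """Keep the Delaunay triangles whose circumradius is <= radius."""
            tri = Delaunay(points)
            kept = []
            for simplex in tri.simplices:
                a, b, c = points[simplex]
                la = np.linalg.norm(b - c)   # side lengths
                lb = np.linalg.norm(a - c)
                lc = np.linalg.norm(a - b)
                # triangle area via the shoelace formula
                area = 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))
                # circumradius R = (la * lb * lc) / (4 * area)
                if area > 0 and (la * lb * lc) / (4.0 * area) <= radius:
                    kept.append(simplex)
            return np.array(kept)

        eps = 0.1
        points = np.random.rand(500, 2)               # stand-in for map pin coordinates
        labels = DBSCAN(eps=eps, min_samples=5).fit_predict(points)

        for label in set(labels) - {-1}:              # -1 is DBSCAN's noise label
            cluster = points[labels == label]
            if len(cluster) >= 3:                     # need at least one triangle
                triangles = alpha_shape_triangles(cluster, radius=eps)
                # draw the kept triangles (e.g. matplotlib triplot/tripcolor)
                # to outline the cluster region at the same scale as eps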

    I very rarely write code for greenfield projects these days, sadly. I can see how startup founders are head over heels over this stuff, because that's what their founding engineers are doing, and LLMs let them get it cranking very, very fast. You just have to hope that they are prudent enough to review and tweak what's written so that you're not saddled with tech debt. And when the inevitable tech debt needs paying (or working around) later, you have to hope that said founders aren't forcing their engineers to keep using LLMs for decisions that could cut across many different teams and systems.

  • by ianbicking on 7/7/25, 5:47 AM

    There are a hundred ways to use AI for any given piece of work. For example, if you are doing interesting work and aren't using AI-assisted research tools (e.g., OpenAI Deep Research), then you are missing out on making the work that much more interesting by understanding the context and history of the subject or adjacent subjects.

    This thesis only makes sense if the work is somehow interesting and you also have no desire to extend, expand, or enrich the work. That's not a plausible position.

  • by JimDabell on 7/7/25, 8:47 AM

    If AI can do the easiest 50% of our tasks, then it means we will end up spending all of our time on what we previously considered to be the most difficult 50% of tasks. This has a lot of implications, but it does generally result in the job being more interesting overall.

  • by jugg1es on 7/7/25, 2:05 PM

    I have found it fascinating how AI has forced me to reflect on what I actually do at work and whether it has value or not.

  • by simpaticoder on 7/7/25, 12:30 PM

    Yes, asking an LLM to "think outside the box" won't work. It is the box.

  • by rorylaitila on 7/7/25, 4:20 PM

    The one thing an LLM currently cannot do is read the room. Even if it contains all existing information and can create any requested admixture of it from its training, that admixture space is infinite, so the curator's role is to coax the most interesting output out of it. The more nuanced and sophisticated the interesting work, the larger the role for this curation.

    I kind of use it that way. The LLM is walking a few feet in front of me, quickly ideating possible paths, allowing me to experiment more quickly. Ultimately I am the decider of what matters.

    This reminds me a bit of photography. A photographer will take a lot of pictures. They try a lot of paths. Most of the paths don't actually work out. What you see of their body of work is the paths that worked, that they selected.

  • by paulcole on 7/7/25, 12:16 PM

    Thesis: Using the word “thesis” is a great way to disguise a whiny op-ed as the writings of a learned soul

    > interesting work (i.e., work worth doing)

    Let me guess, the work you do is interesting work (i.e., work worth doing) and the work other people do is uninteresting work (i.e., work not worth doing).

    Funny how that always happens!

  • by aaronbrethorst on 7/7/25, 4:52 AM

    The vast majority of any interesting project is boilerplate. There's a small kernel of interesting 'business logic'/novel algorithm/whatever buried in a sea of CRUD: user account creation, subscription management, password resets, sending emails, whatever.

  • by briandw on 7/7/25, 12:54 PM

    I feel much more confident that I can take on a project in a domain that I'm not very familiar with. I've been digging into LLVM IR, and I had no prior experience with it. ChatGPT is a much better guide to getting started than the documentation, which is very low quality.

  • by qwertox on 7/7/25, 2:28 PM

    > Meanwhile, I feel like if I tried to offload my work to an LLM, I would both lose context and be violating the do-one-thing-and-do-it-well principle I half-heartedly try to live by.

    He should use it as a Stack Overflow on steroids. I assume he uses Stack Overflow without remorse.

    I used to have year-long streaks of being on SO; now I'm there maybe once or twice per week.

  • by osigurdson on 7/7/25, 1:23 PM

    While I didn't agree with the "junior developer" analogy in the past, I am finding that it is beginning to be a bit more apt. The new Codex tool from OpenAI feels a lot more like this. It seems to work best if you already have a few examples of something that you want to do and now want to add another. My tactic is to spell it out very clearly in the prompt and really focus on having it consistently implement another similar thing with a narrow scope. Because it takes quite a while, I will usually just fix any issues myself as opposed to asking it to fix them. I'm still experimenting, but I think a well-crafted spec / AGENTS.md file is becoming quite important. For me, this plus regular ChatGPT interactions is much more valuable than synchronous Windsurf / Cursor-style usage. I'd prefer to review one meaningful PR than a million little diffs synchronously.

  • by voxelghost on 7/7/25, 5:50 AM

    I don't have an LLM/AI write or generate any code or documents for me - partly because the quality is not good enough, partly because I worry about copyright/licensing/academic rigor, and partly because I worry about losing my own edge.

    But I do use LLM/AI as a rubber duck that talks back, and as a Google on steroids - but one whose work needs double-checking. And as a domain-discovery tool when trying to quickly get a grasp of a new area.

    It's just another tool in the toolbox for me. But the toolbox is like a box of chocolates - you never know what you're going to get.

  • by patrickhogan1 on 7/8/25, 2:21 PM

    Disagree. All work, including interesting work, involves drudgery. AI helps automate the drudgery.

  • by seydor on 7/7/25, 11:32 AM

    I am 100% sure that horse-breeders and carriage-decorators also had very high interest in their work and craft.

  • by CommenterPerson on 7/7/25, 3:16 PM

    Here we go again.

    But "interesting" is subjective, there's no good definition of "intelligence", and AI comes with so much hype attached that we could debate endlessly on HN.

    Suppose "interesting" means something like coming up with a new Fast Fourier Transform algorithm; I seriously doubt an LLM could do something there. OTOH, AI did do new stuff with protein folding.

    So, we can keep debating I guess.

  • by bitwize on 7/7/25, 4:28 AM

    But... agentic changes everything!

  • by rijoja on 7/7/25, 6:44 AM

    yes

  • by darkxanthos on 7/7/25, 6:22 AM

    It's definitely real that a lot of smart productive people don't get good results when they use AI to write software.

    It's also definitely real that a lot of other smart productive people are more productive when they use it.

    These sorts of articles and comments seem to be saying "I'm proof it can't be done," when really there's enough proof that it can that you're just proving you'll be left behind.