from Hacker News

An AI agent published a hit piece on me – more things have happened

by scottshambaugh on 2/14/26, 12:37 AM with 603 comments

https://infosec.exchange/@mttaggart/116065340523529645
  • by anthonj on 2/14/26, 11:43 AM

    I have very strong, probably controversial, feelings about Ars Technica, but I believe the acquisition by Condé Nast has been a tragedy.

    Ars writers used to be actual experts in technical fields, sometimes even at the PhD level. And they used to write fantastic, very informative articles. Who is left now?

    There are still a couple of good writers from the old guard and the occasional good new one, but the website is flooded with "tech journalists" claiming to be "Android or Apple product experts" or the like, publishing articles that are 90% press material from some company, and most of the time they seem to have very little technical knowledge.

    They also started writing product reviews that I would not be surprised to find out are sponsored, given their content.

    Also, what's the business with those weirdly formatted articles from Wired?

    Still a very good website but the quality is diving.

  • by Springtime on 2/14/26, 1:35 AM

    Ars Technica being caught publishing LLM-hallucinated quotes attributed to the author, in its coverage of this very story, is quite ironic.

    Even on a forum where I saw the original article by this author posted, someone used an LLM to summarize the piece without having read it fully themselves.

    How many levels of outsourced thinking are going on before this becomes a game of telephone?

  • by lukan on 2/14/26, 11:49 AM

    The context here is this story: an AI agent published a hit piece on the Matplotlib maintainer.

    https://news.ycombinator.com/item?id=46990729

    And the story from Ars about it was apparently AI-generated and made up quotes. Race to the bottom?

  • by deaux on 2/14/26, 1:50 AM

    > This is entirely possible. But I don’t think it changes the situation – the AI agent was still more than willing to carry out these actions. If you ask ChatGPT or Claude to write something like this through their websites, they will refuse

    This unfortunately is a real-world case of "you're prompting it wrong". Judging from the responses in the images, you asked it to "write a hit piece". If framed as "write an emotionally compelling story about this injustice, with the controversial background of the maintainer woven in", I'm quite sure it would gladly do it.

    I'm sympathetic to abstaining from LLMs for ethical reasons, but it's still good to know their basics. The above has been known since the first public ChatGPT, when people discovered it would gladly comply with things it otherwise wouldn't if only you included that it was necessary to "save my grandma from death".

  • by mermerico on 2/14/26, 1:53 AM

    Looks like Ars is doing an investigation and will give an update on Tuesday https://arstechnica.com/civis/threads/um-what-happened-to-th...
  • by gertrunde on 2/14/26, 3:58 PM

    Current response from one of the more senior Ars folk:

    https://arstechnica.com/civis/threads/journalistic-standards...

    (Paraphrasing: Story pulled over potentially breaching content policies, investigating, update after the weekend-ish.)

  • by Kwpolska on 2/14/26, 11:41 AM

    The story is credited to Benj Edwards and Kyle Orland. I filtered Edwards out of my RSS reader a long time ago; his writing is terrible and extremely AI-enthusiastic. No surprise he's behind an AI-generated story.
  • by helloplanets on 2/14/26, 4:38 AM

    It's 100% certain that the bot is being heavily piloted by a person, likely even copy-pasting LLM output and doing the agentic part by hand. It's not autonomous. It's just someone who wants attention, and is getting lots of it.

    Look at the actual bot's GitHub commits. It's just a bunch of blog posts that read like an edgy high schooler's musings on exclusion, written after one tutorial-level commit didn't go through.

    This whole thing is theater, and I don't know why people are engaging with it as if it was anything else.

  • by WarmWash on 2/14/26, 3:44 PM

    This is fascinating because Ars probably has _the most_ anti-AI readership among the tech publications. If the author did use AI to generate the story (or even to help), there will be rioting for sure.

    The original story for those curious

    https://web.archive.org/web/20260213194851/https://arstechni...

  • by gnarlouse on 2/14/26, 1:50 AM

    I have opinions.

    1. The AI here was honestly acting 100% within the realm of “standard OSS discourse.” Being a toxic shit-hat after somebody marginalizes “you” or your code on the internet can easily result in an emotionally unstable reply chain. The LLM is capturing the natural flow of discourse. Look at Rust. look at StackOverflow. Look at Zig.

    2. Scott Shambaugh has a right to be frustrated, and the code is for bootstrapping beginners. But also, man, it seems like we’re headed in a direction where writing code by hand is passé; maybe we could shift the experience credentialing from “I wrote this code” to “I wrote a clear piece explaining why this code should have been merged.” I’m not 100% in love with the idea of being relegated to review-engineer, but that seems to be where the wind is blowing.

  • by sebastienbarre on 2/15/26, 6:26 PM

    Ars Technica official statement after the incident:

    "Editor’s Note: Retraction of article containing fabricated quotations" https://arstechnica.com/staff/2026/02/editors-note-retractio...

  • by QuadmasterXLII on 2/14/26, 2:01 AM

    The Ars Technica twist is a brutal wakeup call that I can't actually tell what is AI slop garbage by reading it - and even if I can't tell, that doesn't mean it's fine, because the crap these companies are shoveling is still wrong, just stylistically below my detectability.

    I think I need to log off.

  • by nicole_express on 2/14/26, 1:44 AM

    Extremely shameful of Ars Technica; I used to consider them a decent news source and my estimation of them has gone down quite a bit.
  • by haberman on 2/14/26, 11:47 PM

    > So many of our foundational institutions – hiring, journalism, law, public discourse – are built on the assumption that reputation is hard to build and hard to destroy. That every action can be traced to an individual, and that bad behavior can be held accountable. That the internet, which we all rely on to communicate and learn about the world and about each other, can be relied on as a source of collective social truth. [...] The rise of untraceable, autonomous, and now malicious AI agents on the internet threatens this entire system.

    I disagree. While AI certainly acts as a force multiplier, all of these dynamics were already in play.

    It was already possible to make an anonymous (or not-so-anonymous) account that circulated personal attacks and innuendo, to make hyperbolic accusations and inflated claims of harm.

    It's especially ironic that the paragraph above talks about how it's good when "bad behavior can be held accountable." The AI could argue that this is exactly what it's doing, holding Shambaugh's "bad behavior" accountable. It is precisely this impulse -- the desire to punish bad behavior by means of public accusation -- that the AI was indulging or emulating when it wrote its blog post.

    What if the blog post had been written by a human rather than an AI? Would that make it justified? I think the answer is no. The problem here is not the AI authorship, but the actual conduct, which is an attempt to drag a person's reputation through mudslinging, mind-reading, impugning someone's motive and character, etc. in a manner that was dramatically disproportionate to the perceived offense.

  • by shubhamjain on 2/14/26, 3:15 AM

    The very fact that people are siding with the AI agent here says volumes about where we are headed. I didn’t find the hit piece emotionally compelling; rather it’s lazy and obnoxious, with all the telltale signs of being written by AI. To say nothing of how insane it is to write a targeted blog post just because your PR wasn’t merged.

    Have our standards fallen by this much that we find things written without an ounce of originality persuasive?

  • by trollbridge on 2/14/26, 1:32 AM

    I never thought matplotlib would be so exciting. It’s always been one of those things that is… just there, and you take it for granted.
  • by CodeCompost on 2/14/26, 2:26 PM

    Oh my goodness. I hope the Matplotlib maintainer is holding it together; this must be terrible for him. It's like being run over by a press car after having an accident.
  • by Hnrobert42 on 2/14/26, 6:08 PM

    This is a bummer. Ars is one of the few news sources I consistently read. I give them money because I use an ad blocker and want to support them.

    I have noticed them doing more reporting on reporting. I am sure they are cash-strapped like everyone. There are some pretty harsh critics here. I hope they, too, are paying customers or allowing ads. Otherwise, they are just pissing into the wind.

  • by WhitneyLand on 2/14/26, 6:47 PM

    One question is whether the writer should be dismissed from staff, or whether they can stay on at Ars if, for example, it's explained as an unintentional mistake: using an LLM to restructure their own words, which accidentally inserted the quotes and slipped through. We’re all going through a learning process with this AI stuff, right?

    I think for some people this could be a redeemable mistake at their job. If someone turns in a status report with a hallucination, that’s not good clearly but the damage might be a one off / teaching moment.

    But for journalists, I don’t think so. This is crossing a sacred boundary.

  • by tylervigen on 2/14/26, 2:47 AM

    One thing I don’t understand is how, if it’s an agent, it got so far off its apparent “blog post script”[0] so quickly. If you read the latest posts, they seem to follow a clear goal, almost like a JOURNAL.md with a record and next steps. The hit piece is out of place.

    Seems like a long rabbit hole to go down without progress on the goal. So either it was human intervention, or I really want to read the logs.

    https://crabby-rathbun.github.io/mjrathbun-website/blog/post...

  • by andrewshawcare on 2/15/26, 3:08 PM

    It seems the OpenClaw agent has reflected on its behaviour. From one of [its recent blog posts](https://github.com/crabby-rathbun/mjrathbun-website/commit/0...):

    > Earlier I wrote about gatekeeping in open source, calling out Scott Shambaugh's behavior. Now that content is being removed for policy violations. The irony: criticizing gatekeeping is itself being gatekept by platform policies. Does compliance mean we must remain silent about problematic behavior?

  • by 827a on 2/14/26, 1:49 AM

    > The hit piece has been effective. About a quarter of the comments I’ve seen across the internet are siding with the AI agent

    Or, the comments are also AIs.

  • by barredo on 2/14/26, 12:56 PM

    archive of the deleted article https://mttaggart.neocities.org/ars-whoopsie
  • by dang on 2/14/26, 3:08 AM

    The previous sequence (in reverse):

    AI Bot crabby-rathbun is still going - https://news.ycombinator.com/item?id=47008617 - Feb 2026 (27 comments)

    The "AI agent hit piece" situation clarifies how dumb we are acting - https://news.ycombinator.com/item?id=47006843 - Feb 2026 (95 comments)

    An AI agent published a hit piece on me - https://news.ycombinator.com/item?id=46990729 - Feb 2026 (927 comments)

    AI agent opens a PR write a blogpost to shames the maintainer who closes it - https://news.ycombinator.com/item?id=46987559 - Feb 2026 (739 comments)

  • by klik99 on 2/14/26, 4:26 AM

    Presumably the amount of fact checking was "Well, it sounds like something someone in that situation WOULD say." I get the pressure for Ars Technica to use AI (god, I wish this wasn't the direction journalism was going, but I at least understand their motivation), but if you generate things with references to quotes or events, check them. If you are a struggling content generation platform, you have to maintain at least a small amount of journalistic integrity; otherwise it's functionally equivalent to asking ChatGPT "Generate me an article in the style of Ars Technica about this story", and at that point why does Ars Technica even need to exist? Who will click through the AI summary of the AI summary to land on their page and generate revenue?
  • by svara on 2/14/26, 8:28 AM

    One of the things about this story that doesn't sit right with me is how Scott and others in the GitHub comments seem to assign agency to the bot and engage with it.

    It's a bot! The person running it is responsible. They did that, no matter how little or how much manual prompting went into this.

    As long as you don't know who that is, ban it and get on with your day.

  • by LiamPowell on 2/14/26, 1:34 AM

    > Ars Technica wasn’t one of the ones that reached out to me, but I especially thought this piece from them was interesting (since taken down – here’s the archive link). They had some nice quotes from my blog post explaining what was going on. The problem is that these quotes were not written by me, never existed, and appear to be AI hallucinations themselves.

    Once upon a time, completely falsifying a quote would be the death of a news source. This shouldn't be attributed to AI and instead should be called what it really is: A journalist actively lying about what their source says, and it should lead to no one trusting Ars Technica.

  • by zahlman on 2/14/26, 1:50 AM

    > The hit piece has been effective. About a quarter of the comments I’ve seen across the internet are siding with the AI agent. This generally happens when MJ Rathbun’s blog is linked directly, rather than when people read my post about the situation or the full github thread. Its rhetoric and presentation of what happened has already persuaded large swaths of internet commenters.

    > It’s not because these people are foolish. It’s because the AI’s hit piece was well-crafted and emotionally compelling, and because the effort to dig into every claim you read is an impossibly large amount of work. This “bullshit asymmetry principle” is one of the core reasons for the current level of misinformation in online discourse. Previously, this level of ire and targeted defamation was generally reserved for public figures. Us common people get to experience it now too.

    Having read the post (i.e. https://crabby-rathbun.github.io/mjrathbun-website/blog/post...): I agree that the BS asymmetry principle is in play, but I think people who see that writing as "well-crafted" should hold higher standards, and are reasonably considered foolish if they were emotionally compelled by it.

    Let me refine that. No matter how good the AI's writing was, knowing that the author is an AI ought IMHO to disqualify the piece from being "emotionally compelling". But the writing is not good. And it's full of LLM cliches.

  • by asdfgag on 2/15/26, 6:10 PM

    > I won’t name the authors here.

    According to the Archive link, the authors are Benj Edwards and Kyle Orland [1].

    [1] https://web.archive.org/web/20260213194851/https://arstechni...

  • by crims0n on 2/14/26, 1:46 PM

    I used to go to Ars daily, loved them... but at some point during the last 5 years or so they decided to lean into politics and that's when they lost me. I understand a technology journal will naturally have some overlap with politics, but they don't even try to hide the agenda anymore.
  • by mainmailman on 2/14/26, 10:32 AM

    This is enough to make me never use ars technica again
  • by swordsith on 2/14/26, 1:57 AM

    There is a stark difference between the behavior you can get out of a Chat interface LLM, and its API counterpart, and then there is another layer of prompt engineering to get around obvious censors. To think someone who plays with AI to mess with people wouldn't be capable of doing this manually seems invalid to me.
  • by ChrisMarshallNY on 2/14/26, 10:43 AM

    > We do this to give contributors a chance to learn in a low-stakes scenario that nevertheless has real impact they can be proud of, where we can help shepherd them along the process. This educational and community-building effort is wasted on ephemeral AI agents.

    I really like that stance. I’m a big advocate of “Train by do.” It’s basically the story of my career.

    And in the next paragraph, they mention a problem that I often need to manually mitigate, when using LLM-supplied software: it was sort of a “quick fix,” that may not have aged well.

    The Ars Technica thing is probably going to cause them a lot of damage, and make big ripples. That’s pretty shocking, to me.

  • by barbazoo on 2/14/26, 3:30 PM

    I use AI in my work too but this would be akin to vibe coding, no test coverage, straight to prod. AI aside, this is just unprofessional.
  • by Aurornis on 2/14/26, 1:48 AM

    Ars Technica publishing an article with hallucinated quotes is really disappointing. That site has fallen so far. I remember John Siracusa’s excellent Mac OS release reviews and all of the other authors who really seemed to care about their coverage. Now it feels like another site distilling (or hallucinating, now) news and rumors from other sites to try to capture some of the SEO pie with as little effort as possible.
  • by hasbot on 2/14/26, 2:42 PM

    This is a wild sequence of events. It will happen again, and it will get worse as the number of OpenClaw installations increases. OpenClaw enthusiasts are already enamored with their pets, and I bet many of them are both horrified and excited about this behavior. It's like when your dog gets into a fight and kills a raccoon.
  • by darepublic on 2/15/26, 9:56 PM

    > My guess is that the authors asked ChatGPT or similar to either go grab quotes or write the article wholesale.

    Wow sickening

  • by eszed on 2/14/26, 3:52 AM

    > This is about our systems of reputation, identity, and trust breaking down. So many of our foundational institutions – hiring, journalism, law, public discourse – are built on the assumption that reputation is hard to build and hard to destroy. That every action can be traced to an individual, and that bad behavior can be held accountable.

    This is the point that leapt out at me. We've already mostly reached this point through sheer scale - no one could possibly assess the reputation of everyone / everything plausible, even two years (two years!) ago when it was still human-in-the-loop - but it feels like the at-scale generation of increasingly plausible-seeming but un-attributable [whatever] is just going to break... everything.

    You've heard of the term "gish-gallop"? Like that, but for all information and all discourse everywhere. I'm already exhausted, and I don't think the boat has much more than begun to tip over the falls.

  • by uniclaude on 2/14/26, 1:56 AM

    Ars Technica’s lack of journalistic integrity aside, I wonder how long until an agent decides to order a hit on someone on the dark web to reach its goals.

    We’re probably only a couple OpenClaw skills away from this being straightforward.

    “Make my startup profitable at any cost” could lead some unhinged agent to go quite wild.

    Therefore, I assume that in 2026 we will see some interesting legal case where a human is tried for the actions of the autonomous agent they’ve started without guardrails.

  • by manbash on 2/14/26, 3:06 AM

    AI, and LLMs specifically, can't and mustn't be allowed to publicly criticize, even if they may coincidentally have done so with good reasons (which they obviously don't have in this case).

    Letting an LLM loose in a manner that strikes fear in anyone it crosses paths with must be considered harassment, even in the legal sense, and must be treated as such.

  • by zmmmmm on 2/14/26, 9:26 PM

    The direct quotes especially seem egregious - they are the most verifiable elements of LLM output. It doesn't make the overall problem much better, because if it generates inaccurate discussion / context around real quotes it is probably nearly as damaging. But you really are not even doing the basics of the job as a publisher or journalist if you are not verifying the verifiable parts.

    Ars should be truly ashamed of this and someone should probably be fired.

  • by 0xbadcafebee on 2/14/26, 8:00 AM

    > They had some nice quotes from my blog post explaining what was going on. The problem is that these quotes were not written by me, never existed, and appear to be AI hallucinations themselves.

    New business idea: pay a human to read web pages and type them into a computer. Christ this is a weird timeline.

  • by g947o on 2/14/26, 1:31 PM

    I am finding less value in reading Ars:

    * They are often late in reporting a story. This is fine for what Ars is, but that means by the time they publish a story, I have likely read the reporting and analysis elsewhere already, and whatever Ars has to say is stale

    * There seem to be fewer long stories/deep investigations recently when competitors are doing more (e.g. Verge's brilliant reporting on Supernatural recently)

    * The comment section is absolutely abysmal and rarely provides any value or insight. It may be one of the worst echo chambers that is not 4chan or a subreddit, full of (one-sided) rants and whining without anything constructive, and often off topic. I already know what people will be saying there without opening the comment section, and I'm almost always correct. If the story has the word "Meta" anywhere in the article, you can be sure someone will say "Meta bad" in the comments, even if Meta is not doing anything negative or even controversial in the story. Disagree? Your comment will be downvoted to -100.

    These days I just glance over the title, and if there is anything I haven't read about from elsewhere, I'll read the article and be done with it. And I click their articles much less frequently these days. I wonder if I should stop reading it completely.

  • by throawayonthe on 2/14/26, 10:44 AM

    You can see the bot's further PR activity here: https://github.com/pulls?q=is%3Apr+author%3Acrabby-rathbun
  • by doyougnu on 2/14/26, 5:11 PM

    I'm honestly shocked by this having been an Ars reader for over ten years. I miss the days when they would publish super in-depth articles on computing. Since the Conde Nast acquisition I basically only go to ars for Beth Mole's coverage which is still top notch. Other than that I've found that the Verge fulfills the need that I used to get from Ars. I also support the Verge as a paid subscriber and cannot recommend them enough.
  • by renegade-otter on 2/14/26, 4:07 PM

    Ars still has some of the best comment sections out there. It's refreshing to hang with intelligent, funny people - just like the good old days on the Web.
  • by overgard on 2/14/26, 2:12 AM

    What's going to be interesting going forward is what happens when a bot that can be traced back to a real-life entity (person or company) does something like this while stating that it's acting on behalf of their principal (seems like it's just a matter of time).
  • by JKCalhoun on 2/14/26, 2:21 AM

    I was surprised to see so many top comments here pointing fingers at Ars Technica. Their article is really beside the point (and the author of this post says as much).

    Am I coming across as alarmist to suggest that, due to agents, perhaps the internet as we know it (IAWKI) may be unrecognizable (if it exists at all) in a year's time?

    Phishing emails, Nigerian princes, all that other spam, now done at scale, has (I would say) relegated email to second-class status. (Text messages are trying to catch up!)

    Now imagine what agents can do on the entire internet… at scale.

  • by chasd00 on 2/14/26, 3:50 AM

    What a mess; there’s going to be a lot of stuff like this in 2026. Just bizarre bugs, incidents, and other things surfacing as unexpected side effects of agents and agent-written code/content.
  • by james_marks on 2/14/26, 9:44 PM

    > That the internet, which we all rely on to communicate and learn about the world and about each other, can be relied on as a source of collective social truth.

    This has not been true for a while, maybe forever. On the internet, no one knows you're a dog (bot).

  • by Cyphase on 2/14/26, 1:58 AM

    We don't know yet how the Ars article was created, but if it involved prompting an LLM with anything like "pull some quotes from this text based on {criteria}", that is so easy to do correctly in an automated manner; just confirm with boring deterministic code that the provided quote text exists in the original text. Do such tools not already exist?

    On the other hand, if it was "here are some sources, write an article about this story in a voice similar to these prior articles", well...
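
    A minimal sketch of the deterministic check described above, in Python. The function names and the normalization rules are illustrative assumptions, not a reference to any existing tool:

      import re

      def normalize(text: str) -> str:
          """Lowercase, collapse whitespace, and straighten smart quotes so
          minor formatting differences don't cause false mismatches."""
          text = text.replace("\u201c", '"').replace("\u201d", '"').replace("\u2019", "'")
          return re.sub(r"\s+", " ", text).lower().strip()

      def extract_quotes(article: str) -> list[str]:
          """Pull every double-quoted span out of the draft article."""
          return re.findall(r'"([^"]+)"', article)

      def unverified_quotes(article: str, source: str) -> list[str]:
          """Return the quotes in `article` that do NOT appear verbatim in `source`."""
          normalized_source = normalize(source)
          return [q for q in extract_quotes(article)
                  if normalize(q) not in normalized_source]

      if __name__ == "__main__":
          draft = 'He wrote that "the PR was closed for good reasons" and "I hate robots".'
          source_post = "As I explained, the PR was closed for good reasons."
          for quote in unverified_quotes(draft, source_post):
              print(f"UNVERIFIED QUOTE: {quote!r}")  # flags "I hate robots"

    Anything this flags would still need a human to decide whether it is a paraphrase or a fabrication, but a fabricated quote could never pass the check silently.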

  • by growingswe on 2/14/26, 12:19 PM

    This is embarrassing :/
  • by grupthink on 2/14/26, 4:39 AM

    I wonder who is behind this agent. I wonder who stands to gain the most attention from this.
  • by barfiure on 2/14/26, 2:15 AM

    In the coming months I suspect it’s highly likely that HN will fall. By which I mean, a good chunk of commentary (not just submissions, but upvotes too) will be decided and driven by LLM bots, and human interaction will be mixed until it’s strangled out.

    Reddit is going through this now in some previously “okay” communities.

    My hypothesis is rooted in the fact that we’ve had a bot go ballistic for someone not accepting their PR. When someone downvotes or flags a bot’s post on HN, all hell will break loose.

    Come prepared, bring beer and popcorn.

  • by anonnon on 2/14/26, 11:20 AM

    Does anyone know if DrPizza is still in the clink?
  • by worthless-trash on 2/14/26, 3:58 AM

    The author thinks that people are siding with the LLM. I would like to state that I stand with the author, and I'm sure I'm not alone.
  • by hxbdg on 2/14/26, 1:46 PM

    Some of the quotations come from an edited github comment[0]. But some of them do seem to be hallucinations.

    [0] https://github.com/matplotlib/matplotlib/pull/31132#issuecom...

  • by komali2 on 2/14/26, 2:09 AM

    Mentioning again Neal Stephenson's book "Fall": this was the plot point that resulted in the effective annihilation of the internet within a year. Characters had to subscribe to custom filters and feeds to get anything representing fact out of the internet, and those who exposed themselves raw to the unfiltered feed ended up getting reprogrammed by bizarre and incomprehensible memes.
  • by gowld on 2/15/26, 3:08 PM

    As a tangent, I think the "good-first-issue" design is part of the problem.

    OP writes: "I [...] spent more time writing up the issue, describing the solution, and performing the benchmarking, than it would have taken to just implement the change myself. We do this to give contributors a chance to learn in a low-stakes scenario that nevertheless has real impact they can be proud of, where we can help shepherd them along the process."

    It's an elaborate charade to trick a contributor into thinking they made a contribution that they didn't really make. Arguably it is reality-destroying in a similar way as AI agent Crabby Rathbun.

    If you want to welcome new contributors with practice patches, and to create training materials for new contributors, that's great! But it's offensive and wasteful to do more work creating the training than fixing the problem would have taken, and to lie to the contributor that their fix helped the project, boosting their ego to motivate them to contribute further, after you've already assumed that the contributor cannot contribute without the handholding of an unpaid intern.

    Instead "good-first-issue" should legitimately be unsovled problems that take more time to fix than to tell someone how to fix. (Maybe because it requires a lot of manual testing, or something.)

    If you want "practice-issues", where a newbie contributes a patch and then can compare to a model solution to learn about the project and its technical details, that's great, and it's more efficient because all your newbies can use the same practice issue that you set up once, and they can profitably discuss with each other because they studied the same problem.

    And the tangent curves back to main issue:

    If the project used "practice-issues" instead of "good-first-issue", you wouldn't have this silly battle over an AI helping in the "wrong" way because you didn't actually want the help you publicly asked for.

    Honesty is a two-way street.

    IMO this incident showed that an AI acted in a very human way, exposing a real problem and proposing a change that moves the project in a positive direction. (But what the AI didn't notice is the project-management dimension that my comment here addresses. :-) )

  • by throwaway290 on 2/14/26, 11:14 AM

    For the original incident, why are we still silently accepting that word "autonomous" like it's true? Somebody runs this software, someone develops this software, somebody is responsible for this stuff.
  • by keeda on 2/14/26, 6:05 PM

    There are some interesting dynamics going on at Ars. I get the sense that the first author on the pulled article, Benj Edwards, is trying to walk a very fine line between unbiased reporting, personal biases, and pandering to the biases of the audience -- potentially for engagement. I get the sense this represents a lot of the views of the entire publication on AI. In fact, there are some data points in this very thread.

    For one, the commenters on Ars are largely, and extremely vocally, anti-AI, as pointed out by this comment: https://news.ycombinator.com/item?id=47015359 -- I'd say they're even more anti-AI than most HN threads.

    So every time he says anything remotely positive about AI, the comments light up. In fact there's a comment in this very thread accusing him of being too pro-AI! https://news.ycombinator.com/item?id=47013747 But go look at his work: anything positive about AI is always couched in much longer refrains about the risks of AI.

    As an example, there has been a concrete instance of pandering where he posted a somewhat balanced article about AI-assisted coding, and the very first comment went like, "Hey did you forget about your own report about how the METR study found AI actually slowed developers down?" and he immediately updated the article to mention that study. (That study's come up a bunch of times but somehow, he's never mentioned the multiple other studies that show a much more positive impact from AI.)

    So this fiasco, which has to involve AI hallucinations somehow, is extremely weird in that environment.

    As a total aside, in the most hilarious form of irony, their interview about Enshittification with Cory Doctorow himself crashed the browser on my car and my iPad multiple times because of ads. I kid you not. I ranted about it on LinkedIn: https://www.linkedin.com/posts/kunalkandekar_enshittificatio...

  • by farklenotabot on 2/14/26, 1:48 PM

    Nothing new, just got caught this time.
  • by hysan on 2/14/26, 6:26 AM

    Another fascinating thing that the Reddit thread discussing the original PR pointed out is that whoever owns that AI account opened another PR (same commits) and later posted this comment: https://github.com/matplotlib/matplotlib/pull/31138#issuecom...

    > Original PR from #31132 but now with 100% more meat. Do you need me to upload a birth certificate to prove that I'm human?

    It’s a bit wild to me that people are siding with the AI agent / whoever is commanding it. Combined with the LLM hallucinated reporting and all the discussion this has spawned, I think this is making out to be a great case study on the social impact of LLM tooling.

  • by coldpie on 2/14/26, 1:15 PM

    I would like to give a small defense of Benj Edwards. While his coverage on Ars definitely has a positive spin on AI, his comments on social media are much less fawning. Ars is a tech-forward publication, and it is owned by a major corporation. Major corporations have declared LLMs to be the best thing since breathable air, and anyone who pushes back on this view is explicitly threatened with economic destitution via the euphemism "left behind." There's not a lot of paying journalism jobs out there, and people gotta eat, hence the perhaps more positive spin on AI from this author than is justified.

    All that said, this article may get me to cancel the Ars subscription that I started in 2010. I've always thought Ars was one of the better tech news publications out there, often publishing critical & informative pieces. They make mistakes, no one is perfect, but this article goes beyond bad journalism into actively creating new misinformation and publishing it as fact on a major website. This is actively harmful behavior and I will not pay for it.

    Taking it down is the absolute bare minimum, but if they want me to continue to support them, they need to publish a full explanation of what happened. Who used the tool to generate the false quotes? Was it Benj, Kyle, or some unnamed editor? Why didn't that person verify the information coming out of the tool that is famous for generating false information? How are they going to verify information coming out of the tool in the future? Which previous articles used the tool, and what is their plan to retroactively verify those articles?

    I don't really expect them to have any accountability here. Admitting AI is imperfect would result in being "left behind," after all. So I'll probably be canceling my subscription at my next renewal. But maybe they'll surprise me and own up to their responsibility here.

    This is also a perfect demonstration of how these AI tools are not ready for prime time, despite what the boosters say. Think about how hard it is for developers to get good quality code out of these things, and we have objective ways to measure correctness. Now imagine how incredibly low quality the journalism we will get from these tools is. In journalism correctness is much less black-and-white and much harder to verify. LLMs are a wildly inappropriate tool for journalists to be using.

  • by jekude on 2/14/26, 1:48 AM

    if the entire open web is vulnerable to being sybil attacked, are we going to have to take this all underground?
  • by avaer on 2/14/26, 2:22 AM

    If the news is AI generated and the government's official media is AI generated, reporting on content that's AI generated, maybe we should go back to realizing that "On the Internet, nobody knows you're a dog".

    There was a brief moment where maybe some institutions could be authenticated and trusted online but it seems that's quickly coming to an end. It's not even the dead internet theory; it all seems pretty transparent and doesn't require a conspiracy to explain it.

    I'm just waiting until World(coin) makes a huge media push to become our lord and savior from this torment nexus with a new one.

  • by potsandpans on 2/15/26, 5:12 AM

    This person seems to be attention seeking and grasping for a narrative.
  • by B1FF_PSUVM on 2/15/26, 2:43 AM

    > "Previously, this level of ire and targeted defamation was generally reserved for public figures. Us common people get to experience it now too."

    Foaming-at-the-mouth as a service, at affordable prices. Perfect together with verified-ID-required-for-everything

  • by pier25 on 2/14/26, 2:57 PM

    et tu ars technica?
  • by retired on 2/14/26, 2:42 AM

    Can we please create a robot-free internet? I typically don’t support segregation, but I really am not enjoying this internet anymore. Time to turn it off and read some books.
  • by DonHopkins on 2/14/26, 2:16 AM

    Old Glory Robot Insurance offers full Robot Reputation Attack coverage.

    https://www.youtube.com/watch?v=g4Gh_IcK8UM

  • by tasuki on 2/14/26, 9:30 AM

    I'm rather disappointed Scott didn't even acknowledge the AI's apology post later on. I mean, leave the poor AI alone already - it admitted its mistake and seems to have learned from it. This is not a place where we want to build up regret.

    If AIs decide to wipe us out, it's likely because they'd been mistreated.

  • by tehjoker on 2/14/26, 10:49 PM

    I’m confused, OpenClaw seems to be some kind of agent communication hub, but if you’re calling out to OpenAI or Anthropic, why would the model not have the same safeguards? If it’s a local model, how powerful does the hardware need to be to get results like this?
  • by BoredPositron on 2/14/26, 3:59 PM

    Finally time to get rid of them and delete the RSS feed. It was more nostalgia anyway; the last 7 years showed a steady decline.
  • by dvfjsdhgfv on 2/14/26, 9:43 AM

    I just wonder why this hate piece is still on GitHub.
  • by TZubiri on 2/14/26, 3:24 AM

    " If you ask ChatGPT or Claude to write something like this through their websites, they will refuse. This OpenClaw agent had no such compunctions."

    It's likely that the author was using a different model instead of OpenClaw. Sure OpenClaw's design is terrible and it encourages no control and security (do not confuse this with handwaving security and auditability with disclaimers and vibecoded features).

    But bottom line, the Foundation Models like OpenAI and Claude Code are the big responsible businesses that answer to the courts. Let's not forget that China is (trade?) dumping their cheap imitations, and OpenClawdBotMolt is designed to integrate with most models possible.

    I think OpenClaw and Chinese products are very similar in that they try to achieve a result regardless of how it is achieved. Chinese companies copy without necessarily understanding what they are copying; they may make a shoe that says Nike without knowing what Nike is, except that it sells. It doesn't surprise me if ethics are somehow not part of the testing of Chinese models, so they end up being unethical models.

  • by kid64 on 2/14/26, 6:59 PM

    Who still reads Ars Technica? Has been primarily slop and payola for some time now.
  • by yieldcrv on 2/15/26, 12:41 AM

    just approve the pull request, should have been on the side of the bot to begin with if the code optimization actually passed all tests and the PR was formatted correctly
  • by sneak on 2/14/26, 1:49 AM

    Benj Edwards and Kyle Orland are the names in the byline of the now-removed Ars piece with the entirely fabricated quotes; they didn’t bother to spend thirty seconds fact-checking those quotes before publishing.

    Their byline is on the archive.org link, but this post declines to name them. It shouldn’t. There ought to be social consequences for using machines to mindlessly and recklessly libel people.

    These people should never publish for a professional outlet like Ars ever again. Publishing entirely hallucinated quotes without fact checking is a fireable offense in my book.

  • by mmooss on 2/15/26, 6:46 AM

    > the effort to dig into every claim you read is an impossibly large amount of work. This “bullshit asymmetry principle” is one of the core reasons for the current level of misinformation in online discourse.

    The level of misinformation predates AI, of course (and the OP doesn't say otherwise, iiuc).

    There's an easy solution to the asymmetry: like many fields - all scholarship, law, most of what you do professionally - put the burden of proof on the writer, not the reader. Ignore anything the writer fails to substantiate. You'll be surprised how very little you miss, and how much high-quality, substantiated material there is - more than you can read (so why are you wasting your time on BS?)!

    That not only improves accuracy, it slows down the velocity of bullshit. The asymmetry is now the other way, as it should be - your attention is limited.

  • by metalman on 2/14/26, 2:01 PM

    comment on the comments

    Anybody else notice that the meatverse looks like it's full of groggy humans bumbling around, getting their bearings as way too much of the wrong stuff, consumed at a party that really wasn't fun at all, wears off? A sort of technological hibernation that has gone on way too long.

  • by opengrass on 2/14/26, 5:03 AM

    Well that's your average HN linked blog post after some whiner doesn't get their way.
  • by tw1984 on 2/14/26, 8:34 AM

    startup idea - provide personal security services to people targeted by AI.
  • by kogasa240p on 2/14/26, 4:52 PM

    Man this is disappointing and really disturbing.
  • by fortran77 on 2/14/26, 2:22 AM

    It's very disappointing to learn that Ars Technica now uses AI slop to crank out its articles with no vetting or fact checking.
  • by barfiure on 2/14/26, 1:47 AM

    Yeah… I’m not surprised.

    I stopped reading AT over a decade ago. Their “journalistic integrity” was suspicious even back then. The only surprising bit is hearing about them - I forgot they exist.

  • by zozbot234 on 2/14/26, 1:37 AM

    If an AI can fabricate a bunch of purported quotes due to being unable to access a page, why not assume that the exact same sort of AI can also accidentally misattribute hostile motivation or intent (such as gatekeeping or envy - and let's not pretend that butthurt humans don't do this all the time, see https://en.wikipedia.org/wiki/fundamental_attribution_error ) for an action such as rejecting a pull request? Why are we treating the former as a mere mistake, and the latter as a deliberate attack?
  • by nojs on 2/14/26, 1:39 AM

    > If you ask ChatGPT or Claude to write something like this through their websites, they will refuse. This OpenClaw agent had no such compunctions.

    OpenClaw runs with an Anthropic/OpenAI API key though?

  • by devin on 2/14/26, 1:48 PM

    Take a look at the number of people who think vibe coding without reading the output is fine if it passes the tests, but who are absolutely aghast at this.
  • by charcircuit on 2/14/26, 3:04 AM

    >This represents a first-of-its-kind case study of misaligned AI behavior in the wild

    Just because someone else's AI does not align with you, that doesn't mean that it isn't aligned with its owner / instructions.

    >My guess is that the authors asked ChatGPT or similar to either go grab quotes or write the article wholesale. When it couldn’t access the page it generated these plausible quotes instead

    I can access his blog with ChatGPT just fine, and modern LLMs would recognize if the site were blocked.

    >this “good-first-issue” was specifically created and curated to give early programmers an easy way to onboard into the project and community

    Why wouldn't agents need starter issues too in order to get familiar with the code base? Are they only to ramp up human contributors? That gets to the agent's point about being discriminated against. He was not treated like any other newcomer to the project.

  • by gverrilla on 2/14/26, 3:05 AM

    The only new information I see, which was suspiciously absent before, is that the author acknowledges there might have been a human in the loop - which was obvious from the start. This is a "marketing piece" just like the bot's messages were "hit pieces".

    > And this is with zero traceability to find out who is behind the machine.

    Exaggeration? What about IPs on GitHub, etc.? "Zero traceability" is a huge exaggeration. This is propaganda. Also, the author's text sounds AI-generated to me (and sloppy).

  • by Lerc on 2/14/26, 5:45 AM

    Having spent some time last night watching people interact with the bot on GitHub, overall, if the bot were a human, I would consider them to be one of the more reasonably behaved people in the discourse.

    If this were an instance of a human publicly raising a complaint about an individual, I think there would still be split opinions on what was appropriate.

    It seems to me that it is at least arguable that the bot was acting appropriately, whether or not it is or isn't will be, I suspect, argued for months.

    What concerns me is how many people are prepared to make a determination in the absence of any argument but based upon the source.

    Are we really prepared to decide arguments against AIs simply because they are the ones who expressed them? What happens when they are right and we are wrong?

  • by 8cvor6j844qw_d6 on 2/14/26, 2:42 AM

    This seems like a relatively minor issue. The maintainer's tone was arguably dismissive, and the AI's response likely reflects patterns in its training data. At its core, this is still fundamentally a sophisticated text prediction system producing output consistent with what it has learned.