from Hacker News

First Murder-Suicide Case Associated with AI Psychosis

by Liwink on 9/1/25, 1:37 AM with 35 comments

  • by mensetmanusman on 9/1/25, 2:35 AM

    Having dealt with psychosis in both near and distant family members on more than one occasion…

    The truth is that the most random stuff will set them off. In one case, a patient would find reinforcement in obscure YouTube groups of people predicting future doom.

    Maybe the advantage of AI over YouTube psychosis groups is that AI could at least be trained to alert the authorities after enough murder/suicide data is gathered.

  • by funwares on 9/1/25, 10:48 AM

    Both his Instagram [0] and YouTube pages [1] are still up. He had a habit of uploading screen recordings of his chats with ChatGPT.

    They look like fairly standard, incomprehensible psychosis messages, but it seems notable to me that ChatGPT responds as if they were normal (profound, even) messages.

    The 'In Search of AI Psychosis' article and discussions on HN [2] from a few days ago are very relevant here too.

    [0] https://www.instagram.com/eriktheviking1987

    [1] https://youtube.com/@steinsoelberg2617

    [2] https://news.ycombinator.com/item?id=45027072

  • by judge123 on 9/1/25, 4:35 AM

    This is horrifying, but I feel like we're focusing on the wrong thing. The AI wasn't the cause; it was a horrifying amplifier. The real tragedy here is that a man was so isolated he turned to a chatbot for validation in the first place.

  • by ChrisArchitect on 9/1/25, 6:09 AM

  • by fbhabbed on 9/1/25, 8:14 AM

    This is not Cyberpunk 2077, and "AI psychosis" is trash, just like the article.

    Someone who is mentally ill can use AI; that doesn't mean AI is the problem. A mentally ill person can also use a car. Should we ban cars?

  • by footlose_3815 on 9/1/25, 2:58 AM

    A tech industry veteran? You would think he could recognize that the exchange between him and the AI was disingenuous, but nobody is immune to mental illness.

  • by DaveZale on 9/1/25, 1:44 AM

    Why is this stuff legal?

    There should be a prominent "black box" warning on every chat message from an AI, like "This is AI guidance, which can potentially result in grave bodily harm to yourself and others."