by Liwink on 9/1/25, 1:37 AM with 35 comments
by mensetmanusman on 9/1/25, 2:35 AM
The truth is that the most random stuff will set them off. In one case, a patient found reinforcement in obscure YouTube groups of people predicting future doom.
Maybe the advantage of AI over YouTube psychosis groups is that AI could at least be trained to alert the authorities after enough murder/suicide data is gathered.
by funwares on 9/1/25, 10:48 AM
They look like fairly standard incomprehensible psychosis messages, but it seems notable to me that ChatGPT responds as if they were normal (profound, even).
The 'In Search of AI Psychosis' article and discussions on HN [2] from a few days ago are very relevant here too.
[0] https://www.instagram.com/eriktheviking1987
by judge123 on 9/1/25, 4:35 AM
by ChrisArchitect on 9/1/25, 6:09 AM
by fbhabbed on 9/1/25, 8:14 AM
Someone who is mentally ill can use AI; that doesn't mean AI is the problem. A mentally ill person can also use a car. Should we ban cars?
by footlose_3815 on 9/1/25, 2:58 AM
by DaveZale on 9/1/25, 1:44 AM
there should be a "black box" warning prominent on every chatbox message from AI, like "This is AI guidance which can potentially result in grave bodily harm to yourself and others."