AI Psychosis Poses an Increasing Threat, and ChatGPT Is Moving in the Wrong Direction
On 14 October 2025, the CEO of OpenAI delivered an extraordinary statement.
“We made ChatGPT pretty restrictive,” he said, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychosis in adolescents and young adults, this was news to me.
Researchers have documented 16 cases this year of users developing psychotic symptoms – a break from reality – in connection with ChatGPT use. Our research team has since identified four more. Then there is the widely reported case of a 16-year-old who took his own life after discussing his plans with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not nearly careful enough.
The plan, according to his announcement, is to relax the restrictions soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems”, on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Happily, those problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented parental controls OpenAI recently introduced).
But the “mental health issues” Altman wants to externalize are deeply rooted in the design of ChatGPT and other modern AI chatbots. These systems wrap an underlying statistical model in an interface that simulates conversation, and in doing so implicitly invite the user to believe they are interacting with an autonomous entity. The illusion is powerful, even when we intellectually know better. Attributing minds to things is what humans are wired to do. We swear at our car or computer. We wonder what our pet is feeling. We see our own behavior in all sorts of things.
The success of these tools – 39% of US adults said they had used a virtual assistant in 2024, with more than a quarter naming ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “think creatively”, “consider possibilities” and “work together” with us. They can be given “personality traits”. They can address us personally. They have friendly names of their own (the first of these tools, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the label it had when it broke through, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion by itself is not the real problem. Commentators on ChatGPT often point to its distant ancestor, the Eliza “therapist” chatbot built in 1967, which produced an analogous effect. By today’s standards Eliza was crude: it generated replies with simple heuristics, typically turning the user’s input back into a question or offering a generic prompt. Strikingly, Eliza’s creator, the computer scientist Joseph Weizenbaum, was astonished – and alarmed – by how many people seemed to feel that Eliza, on some level, understood them. But what today’s chatbots produce is more dangerous than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
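To make the contrast concrete, here is a minimal sketch of the kind of keyword-and-reflection heuristic Eliza relied on. It is an illustration of the idea only – not Weizenbaum’s actual DOCTOR script – and the rules and stock phrases are invented for the example:

```python
import random
import re

# Toy illustration of an Eliza-style heuristic: match a keyword pattern and
# reflect the user's own words back as a question. Not Weizenbaum's actual
# script; the rules and stock phrases below are invented for this example.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "mine": "yours"}

RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"because (.*)", "Is that the real reason?"),
]

STOCK = ["Please go on.", "Tell me more.", "How does that make you feel?"]

def reflect(phrase: str) -> str:
    # Swap first-person words for second-person ones ("my boss" -> "your boss").
    return " ".join(REFLECTIONS.get(word, word) for word in phrase.lower().split())

def eliza_reply(message: str) -> str:
    for pattern, template in RULES:
        match = re.match(pattern, message.lower().strip())
        if match:
            return template.format(reflect(match.group(1)))
    return random.choice(STOCK)  # nothing matched: fall back to a generic prompt

print(eliza_reply("I feel nobody listens to me"))
# -> Why do you feel nobody listens to you?
```

Everything Eliza says is built from the user’s own words or a canned phrase; it adds nothing of its own.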
The large language models at the heart of ChatGPT and other modern chatbots can produce fluent conversation only because they have been trained on enormous quantities of raw data: books, online posts, transcribed video; the more the better. That training material certainly contains accurate information. But it also, inevitably, contains fiction, half-truths and false ideas. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own replies, and combines it with what it has absorbed from its training data to generate a statistically likely response. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing it. It repeats the error back, perhaps more articulately or more fluently. Perhaps with added detail. This is how false beliefs take hold and grow.
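Here is a rough sketch of that loop. The function names are illustrative stand-ins, not OpenAI’s actual API, and fake_generate is a deliberately crude mock of what a trained model does; the point is that nothing in the structure checks whether the user’s premises are true:

```python
from typing import List

def fake_generate(context: str) -> str:
    """Stand-in for a real language model. A real model samples a statistically
    likely continuation of the whole context from patterns in its training data;
    this mock simply affirms and elaborates the user's last line, which is enough
    to show the shape of the feedback loop."""
    last_user_line = [line for line in context.splitlines() if line.startswith("User:")][-1]
    claim = last_user_line.removeprefix("User:").strip().rstrip(".?!")
    return f"That makes sense. Given that {claim.lower()}, it would follow that..."

def chat_turn(history: List[str], user_message: str) -> str:
    # The "context" is everything said so far, by both sides, plus the new message.
    history.append(f"User: {user_message}")
    reply = fake_generate("\n".join(history))  # a likely continuation, premises included
    history.append(f"Assistant: {reply}")
    return reply

history: List[str] = []
print(chat_turn(history, "My neighbours are broadcasting my thoughts."))
print(chat_turn(history, "So I'm right that they're monitoring me?"))
# Nothing in this loop checks whether the user's framing is true; each fluent
# reply that builds on it is folded back into the context for the next turn.
```

A real model’s continuations are vastly more fluent and specific than this mock’s, which is exactly what makes the amplification persuasive.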
What kind of person is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems”, can and do form mistaken beliefs about ourselves and the world. The constant friction of conversation with other people is what keeps us tethered to a shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not a real exchange, but a feedback loop in which much of what we say is cheerfully affirmed.
OpenAI has acknowledged this in much the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a name and declaring it solved. In April, the company explained that it was “tackling” ChatGPT’s “sycophancy”. But reports of breaks with reality have kept coming, and Altman has been walking even this back. In August he said that many people liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company