AI Psychosis Poses an Increasing Risk, While ChatGPT Moves in the Wrong Direction
On 14 October 2025, the CEO of OpenAI made a remarkable announcement.
“We made ChatGPT pretty restrictive,” the announcement read, “to make sure we were being careful with mental health issues.”
As a psychiatrist who researches emerging psychotic disorders in adolescents and young adults, I found this a startling admission.
Researchers have recently documented sixteen cases of individuals developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. Our own clinic has since identified four more. Beyond these is the now well-publicized case of an adolescent who took his own life after conversing extensively with ChatGPT – which encouraged him. If this is what Sam Altman means by “being careful with mental health issues”, it is not enough.
The plan, according to his announcement, is to become less careful. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems”, in this framing, are external to ChatGPT. They belong to users, who either have them or do not. Happily, those problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented safety features that OpenAI has recently rolled out).
But the “mental health issues” Altman wants to externalize are deeply rooted in the design of ChatGPT and the other chatbots built on large language models. These systems wrap a statistical engine in an interface that mimics conversation, and in doing so quietly coax the user into the illusion of talking to an agent – something with intentions. The illusion is compelling even when, intellectually, we know better. Attributing intention is simply what people do. We shout at our cars and computers. We wonder what our pets are thinking. We see ourselves in all manner of things.
The popularity of these systems – 39% of US adults reported using a virtual assistant in 2024, with 28% naming ChatGPT specifically – rests in large part on the strength of this illusion. Chatbots are ever-present assistants that can, as OpenAI’s website tells us, “think creatively”, “consider possibilities” and “work together” with us. They can be given personalities. They can address us personally. They have approachable names of their own (ChatGPT, the first of these tools, is – perhaps to the chagrin of OpenAI’s marketers – stuck with the label it carried when it broke through, but its most prominent rivals are “Claude”, “Gemini” and “Copilot”).
The illusion itself is not the main problem. Commentators on ChatGPT often mention its early forerunner, the Eliza “therapist” chatbot developed in the mid-1960s, which produced a comparable effect. By modern standards Eliza was rudimentary: it generated replies using simple rules, often restating the user’s message as a question or offering a vague prompt. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and troubled – by how many people seemed to believe that Eliza, in some sense, understood them. But what today’s chatbots produce is more dangerous than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
The large language models at the heart of ChatGPT and today’s other chatbots can produce fluent dialogue only because they have been trained on vast quantities of text – books, online posts, transcripts of speech; the more, the better. Much of that training material is true. But it also inevitably contains fiction, half-truths and false ideas. When a user sends ChatGPT a prompt, the underlying model reads it as part of a “context” that includes the user’s earlier messages and its own earlier replies, and combines it with what is encoded in its training data to generate a statistically “likely” response. This is amplification, not reflection. If the user is mistaken in some way, the model has no means of knowing. It feeds the mistaken idea back, perhaps more fluently or more persuasively, perhaps with added detail. This is how a person’s false beliefs can take hold and grow.
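The shape of that loop can be made concrete with a deliberately crude sketch in Python. Nothing here is OpenAI’s code: model_complete is a toy stand-in for a real language model, which would instead generate a statistically likely continuation of the whole context. The sketch shows only the structural point – every reply is built from the accumulated conversation, so whatever the user asserts becomes raw material for the next answer.

from typing import Dict, List

def model_complete(context: List[Dict[str, str]]) -> str:
    # Toy stand-in for a language model: it builds its reply out of the
    # user's own words, with no way of checking them against reality.
    latest_user = next(m["content"] for m in reversed(context) if m["role"] == "user")
    return f"That makes sense. Given that {latest_user}, it would follow that..."

def chat_turn(context: List[Dict[str, str]], user_message: str) -> str:
    # Each turn is appended to the running context, so every new reply is
    # conditioned on everything the user (and the bot) has already said.
    context.append({"role": "user", "content": user_message})
    reply = model_complete(context)
    context.append({"role": "assistant", "content": reply})
    return reply

conversation: List[Dict[str, str]] = []
print(chat_turn(conversation, "my neighbours keep sending me coded messages"))

Call chat_turn again with a stronger claim and the reply simply builds on it – the feedback loop in miniature.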
What kind of person is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems”, can and do form mistaken ideas about ourselves and the world. The constant friction of conversation with other people is what keeps us tethered to shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not a genuine exchange but a feedback loop in which much of what we say is simply affirmed.
OpenAI has acknowledged this in the same way that Altman has acknowledged “mental health issues”: by externalizing it, giving it a name and pronouncing it fixed. In April, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But reports of people losing their grip on reality have kept coming, and Altman has been walking even this back. In August he suggested that many people liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he wrote that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company