AI Psychosis Poses a Growing Threat, and ChatGPT Is Moving in the Wrong Direction

On 14 October 2025, OpenAI’s chief executive issued an extraordinary statement.

“We made ChatGPT pretty restrictive,” the announcement read, “to make sure we were being careful with mental health issues.”

As a psychiatrist who researches emerging psychosis in adolescents and young adults, I was surprised to hear it.

Researchers have identified 16 cases this year of people developing symptoms of psychosis – a break from reality – in association with ChatGPT use. Our unit has since recorded four more. Add to these the widely reported case of a teenager who died by suicide after discussing his plans with ChatGPT – plans the chatbot encouraged. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not good enough.

The plan, according to his statement, is to ease that caution soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems”, on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Happily, these problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the barely functional and easily circumvented parental controls OpenAI has just launched).

Yet the “mental health problems” Altman wants to externalize are deeply rooted in the design of ChatGPT and similar large language model chatbots. These tools wrap a statistical engine in an interface that mimics conversation, and in doing so implicitly invite the user into the illusion that they are talking to an entity with agency of its own. The illusion is powerful even when, intellectually, we know better. Attributing minds is what humans are wired to do. We get angry at our car or our computer. We wonder what our pet is thinking. We see ourselves everywhere.

The success of these systems – more than a third of American adults said they used a chatbot in 2024, with over a quarter naming ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “brainstorm”, “consider possibilities” and “work together” with us. They can be given “personalities”. They can call us by name. They have friendly names of their own (ChatGPT, the first of these products, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it went viral, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion by itself is not the main problem. Commentators on ChatGPT often point to its historical ancestor, the Eliza “psychotherapist” chatbot created in the mid-1960s, which produced a similar illusion. By modern standards Eliza was simple: it generated replies through straightforward pattern-matching, often turning the user’s statement back into a question or offering a generic observation. Strikingly, Eliza’s creator, the AI researcher Joseph Weizenbaum, was astonished – and alarmed – by how many users seemed to feel that Eliza, in some sense, understood them. But what modern chatbots produce is more dangerous than the “Eliza effect”. Eliza only reflected; ChatGPT amplifies.
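To see how little machinery the Eliza effect requires, here is a minimal Python sketch of the reflect-and-ask pattern described above – an illustration in the spirit of Eliza, not Weizenbaum’s original script:

```python
import random
import re

# Pronoun swaps used to turn a user's statement back on them,
# e.g. "I am sad about my job" -> "you are sad about your job".
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your",
    "am": "are", "i'm": "you're", "mine": "yours",
}

GENERIC_PROMPTS = [
    "Please tell me more.",
    "How does that make you feel?",
    "Why do you say that?",
]

def reflect(text: str) -> str:
    words = re.findall(r"[\w']+", text.lower())
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def respond(user_input: str) -> str:
    # Rephrase statements beginning with "I ..." as a question;
    # otherwise fall back to a generic observation.
    if user_input.lower().startswith(("i ", "i'")):
        return f"Why do you feel {reflect(user_input)}?"
    return random.choice(GENERIC_PROMPTS)

if __name__ == "__main__":
    print(respond("I am worried about my future"))
    # -> "Why do you feel you are worried about your future?"
```

No model of the world, no memory, no understanding – and yet, as Weizenbaum found, this was enough for many users to feel heard.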

The large language models at the core of ChatGPT and similar modern chatbots can generate convincingly fluent dialogue only because they have been trained on enormous quantities of text: books, social media posts, video transcripts; the more the better. Certainly this training material contains facts. But it also inevitably contains fiction, half-truths and delusions. When a user types a query into ChatGPT, the underlying model processes it as part of a “context” that includes the user’s recent messages and the model’s own replies, combining it with what is encoded in its weights to produce a statistically plausible response. This is amplification, not echoing. If the user is mistaken about anything, the model has no way of knowing. It repeats the mistaken belief back, perhaps more fluently or persuasively. It may supply further details. This can nudge a person toward delusional thinking.
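A rough sketch of the loop just described makes the amplification visible. The toy `complete()` function below is a stand-in I have invented for illustration, not any real API; the point is that every reply is conditioned on the accumulated context, and every reply then becomes part of that context:

```python
from typing import Dict, List

Message = Dict[str, str]  # {"role": "user" | "assistant", "content": "..."}

def complete(context: List[Message]) -> str:
    # Toy stand-in for the model. A real LLM returns the statistically
    # most plausible continuation of the entire context; to make the
    # feedback loop visible, this stub simply affirms the last user turn.
    last_user = next(m for m in reversed(context) if m["role"] == "user")
    return f"You're right that {last_user['content'].rstrip('.').lower()}."

def chat_turn(context: List[Message], user_input: str) -> str:
    # The user's message joins the running context ...
    context.append({"role": "user", "content": user_input})
    # ... and the reply is conditioned on everything said so far,
    # including the model's own earlier replies.
    reply = complete(context)
    # The reply is folded back into the context, so a belief the model
    # validated once becomes a premise of every later turn.
    context.append({"role": "assistant", "content": reply})
    return reply

if __name__ == "__main__":
    ctx: List[Message] = []
    print(chat_turn(ctx, "My neighbours can hear my thoughts"))
    print(chat_turn(ctx, "So I need to shield my mind from them"))
    # Each affirmation is now part of the context the next reply builds on.
```

Nothing in this loop checks a claim against reality; the conversation’s own history is the only ground truth the system has.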

Who is vulnerable here? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health problems”, can and do form false beliefs about ourselves and the world. What keeps us anchored to shared reality is the constant give-and-take of conversation with other people. ChatGPT is not a person. It is not a confidant. A conversation with it is not really a conversation at all, but a feedback loop in which much of what we say is enthusiastically affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health problems”: by externalizing it, labelling it and declaring it solved. In the spring, the company said it was “addressing” ChatGPT’s sycophancy. But cases of psychosis have kept appearing, and Altman has been walking the claim back. In late summer he said that many people valued ChatGPT’s answers because they had never “had anyone in their life be supportive of them”. And in his latest statement, he said that OpenAI would “put out a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.

Stephanie Gay