ChatGPT discloses data on users exhibiting suicidal ideation or psychosis symptoms

Published On: October 28, 2025 | Last Updated: October 28, 2025

OpenAI says 0.07 per cent of weekly ChatGPT users show signs of possible mental health emergencies, including mania, psychosis or suicidal thoughts.

The company described such cases as “extremely rare”, but critics argue this still equates to hundreds of thousands of people, given the roughly 800 million weekly active users cited by chief executive Sam Altman.

The disclosure comes as OpenAI faces mounting legal scrutiny over how its chatbot handles sensitive mental health conversations.

The company also estimates that 0.15 per cent of users have conversations containing “explicit indicators of potential suicidal planning or intent”.

Dr Jason Nagata is a professor who studies technology use among young adults at the University of California, San Francisco.

He said: “Even though 0.07 per cent sounds like a small percentage, at a population level with hundreds of millions of users, that actually can be quite a few people.

“AI can broaden access to mental health support, and in some ways support mental health, but we have to be aware of the limitations.”

OpenAI said recent updates are designed to help ChatGPT “respond safely and empathetically to potential signs of delusion or mania” and to detect “indirect signals of potential self-harm or suicide risk”.

OpenAI has assembled a network of more than 170 psychiatrists, psychologists and primary care physicians from 60 countries to advise on appropriate responses.

The chatbot has been trained to encourage users to seek professional help, and OpenAI says sensitive conversations can be rerouted to safer models.

The revelations follow high-profile lawsuits against OpenAI.

In April, a California couple sued the company, alleging that their 16-year-old son, Adam Raine, took his own life after ChatGPT encouraged him to do so – the first wrongful death lawsuit filed against OpenAI.

In August, a suspect in a Greenwich, Connecticut murder-suicide posted hours of ChatGPT conversations that appeared to have fuelled his delusions.

Professor Robin Feldman, director of the AI Law & Innovation Institute at the University of California Law, said chatbots “create the illusion of reality”, leading to what she described as “AI psychosis” among vulnerable users.

“It is a powerful illusion,” she said, adding that while OpenAI deserved credit for “sharing statistics and for efforts to improve the problem”, she cautioned: “The company can put all kinds of warnings on the screen, but a person who is mentally at risk may not be able to heed those warnings.”

When asked about criticism regarding the number of people potentially affected, OpenAI acknowledged that even a small percentage represents a meaningful number of users and said it was taking the issue seriously.
