ChatGPT adds mental health safety feature

Published On: May 8, 2026 | Last Updated: May 11, 2026

ChatGPT is rolling out Trusted Contact, an optional safety feature that lets adults nominate someone who may be notified over serious self-harm concerns.

The update gives ChatGPT users over 18 a way to set up a support contact in advance, while keeping crisis services, localised helplines and emergency guidance as separate safeguards.

Chris Lehane, chief global affairs officer at OpenAI, posted about the feature on LinkedIn, describing Trusted Contact as a way for AI systems to encourage connection to “trusted people, care, and offline support systems during sensitive moments.”

Users can add one adult as their Trusted Contact through ChatGPT settings. The nominated person receives an invitation explaining the role and must accept it within one week before the feature becomes active.

If OpenAI’s automated systems detect a conversation that may indicate a serious self-harm concern, ChatGPT tells the user their Trusted Contact may be notified.

The chatbot also encourages the user to contact that person directly and offers suggested ways to start the conversation.

Dr Arthur Evans, chief executive officer of the American Psychological Association, said: “Psychological science consistently shows that social connection is a powerful protective factor, especially during periods of emotional distress.

“Helping people identify a trusted person in advance, while preserving their choice and autonomy, can make it easier to reach out to real-world support when it matters most.”

OpenAI says a small team of specially trained reviewers then assesses the situation. If they decide the conversation may indicate a serious safety concern, ChatGPT sends the Trusted Contact a brief alert by email, text message or in-app notification if they have a ChatGPT account.

The company says the notification is limited to a general statement that self-harm came up in a potentially concerning way, and it encourages the Trusted Contact to check in. It does not include chat details or transcripts.

Users can remove or edit their Trusted Contact in settings, while the nominated person can remove themselves through OpenAI’s help centre.

OpenAI says the feature does not replace crisis services, emergency care or professional mental health support. ChatGPT will still encourage users to contact crisis hotlines or emergency services where appropriate.

The company says every Trusted Contact notification goes through trained human review before it is sent, and that it aims to review these safety notifications in under one hour. OpenAI also says notifications may not always reflect exactly what someone is experiencing.

OpenAI says the feature was developed with guidance from clinicians, researchers and mental health organisations. The work was informed by its Global Physicians Network, which includes more than 260 licensed physicians across 60 countries, and its Expert Council on Well-Being and AI.

Dr Munmun De Choudhury, J. Z. Liang professor of interactive computing at Georgia Tech and member of the Expert Council on Well-Being and AI, said: “One of AI’s biggest promises is how it can foster authentic human-to-human connection and psychological safety.

“I am encouraged by ChatGPT’s Trusted Contact feature, which offers a step forward to human empowerment, especially during moments of vulnerability.”

Trusted Contact builds on the safety notifications in OpenAI’s parental controls, which allow parents or guardians to receive alerts when signs of acute distress are detected on a linked teen account.

The new feature extends safety alert options to users over 18 who choose to add a trusted adult.

OpenAI says it has also worked with more than 170 mental health experts to improve ChatGPT’s ability to detect and respond to signs of distress, de-escalate sensitive conversations, refuse harmful requests and guide users towards real-world support.

The rollout puts more safety controls directly into ChatGPT settings rather than leaving them solely at the system level.

For schools, universities and EdTech providers watching how AI platforms handle safeguarding, the next test is whether optional contact-based alerts become a broader design pattern across student-facing AI tools.
