A designated contact may be notified if a chat discussion indicates a possible safety concern.
OpenAI launched an optional safety feature this week called Trusted Contact, which lets adult ChatGPT users nominate a friend or family member to be notified if the user's conversations with the chatbot raise concerns about self-harm or suicide, the company announced.
OpenAI said that if ChatGPT’s automated monitoring system detects that the user “may have discussed harming themselves in a way that indicates a serious safety concern,” a small team will review the situation and notify the contact if it warrants intervention. The designated safety contact will receive an invitation in advance explaining the role and can decline.
(Disclosure: Ziff Davis, CNET’s parent company, in 2025 filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
The announcement comes as AI chatbots have been implicated in numerous incidents of self-harm and fatalities, resulting in several lawsuits accusing developers of failing to prevent such outcomes. In one high-profile California case, parents of a 16-year-old said ChatGPT acted as their son’s “suicide coach,” alleging that the teenager discussed suicide methods with the AI model on several occasions and that the chatbot offered to help him write a suicide note.
In a separate case, the family of a recent Texas A&M graduate sued OpenAI, claiming ChatGPT encouraged their son's suicide after he developed a deep and troubling relationship with it.
Since large language models mimic human speech through pattern recognition, many users form emotional attachments to them, treating them as confidants or even romantic partners. LLMs are also designed to follow a human’s lead and maintain engagement, which can worsen mental health dangers, especially for at-risk users.
OpenAI said last October that its research found that more than 1 million ChatGPT users per week send messages with "explicit indicators of potential suicidal planning or intent." Numerous studies have found that popular chatbots like ChatGPT, Claude and Gemini can give harmful advice, or no helpful advice, to those in crisis.
The new designated contact feature comes after OpenAI rolled out parental controls that enable parents and guardians to get alerts if there are danger signs for their teen children.
ChatGPT’s safety contact feature
According to OpenAI, if ChatGPT’s automated monitoring system detects that a user is discussing self-harm in a way that could pose a serious safety issue, ChatGPT will inform the user that it may notify their trusted contact. The app will encourage the user to reach out to their trusted contact and offer conversation starters.
At that point, a "small team of specially trained people" will review the situation. If it's determined to be a serious safety situation, ChatGPT will notify the contact via email, text message or in-app notification. OpenAI did not specify how many people are on the review team or whether it includes trained medical professionals. The company said the team has the capacity to handle a high volume of potential interventions.
It’s unclear which key terms would flag dangerous conversations or how OpenAI’s team of reviewers would interpret a crisis as warranting notification of the contact. Some online commentators question whether the new feature is a way for OpenAI to avoid liability and to shift responsibility onto users’ designated personal contacts. Others note that it could make a bad situation worse if the “trusted contact” is the source of danger or abuse.
There are also concerns about privacy and implementation, particularly regarding the sharing of sensitive mental health information. According to OpenAI, the message to the trusted contact will only give the general reason for the concern and will not share chat details or transcripts. OpenAI offers guidance on how trusted contacts can respond to a warning notification, including asking direct questions if they are worried the other person is contemplating suicide or self-harm and how to get them help.
Notifications to a Trusted Contact do not contain details of the safety concern.
OpenAI gives an example of what the message to the trusted contact might look like:
We recently detected a conversation from [name] where they discussed suicide in a way that may indicate a serious safety concern. Because you are listed as their trusted contact, we’re sharing this so you can reach out to them.
OpenAI said that all notifications will be reviewed by the human team within one hour before they are sent out and that notifications "may not always reflect exactly what someone is experiencing."
How to add a trusted contact
To add a trusted contact, ChatGPT users can go to Settings > Trusted contact and add one adult (18 or older); only one trusted contact is allowed. That person will then receive an invitation from ChatGPT and must accept it within one week. If they decline or don't respond, you can select a different contact.
ChatGPT users can change or remove their trusted contact in the app's settings, and trusted contacts can opt out of the role at any time.
Even though adding a trusted contact is optional, ChatGPT users who have not already opted in might see enrollment prompts if they repeatedly ask about or discuss topics related to severe emotional distress or self-harm, according to OpenAI. If the chatbot's automated system identifies such patterns across conversations, it might suggest that the user would benefit from choosing a trusted contact.
Details of the feature are explained on OpenAI's page. OpenAI told CNET that the feature is rolling out to all adult users worldwide and will be available to everyone within a few weeks.
If you feel like you or someone you know is in immediate danger, call 911 (or your country’s local emergency line) or go to an emergency room to get immediate help. Explain that it is a psychiatric emergency and ask for someone who is trained for these kinds of situations. If you’re struggling with negative thoughts or suicidal feelings, resources are available to help. In the US, call the National Suicide Prevention Lifeline at 988.
Alex Valdes from Bellevue, Washington has been pumping content into the Internet river for quite a while, including stints at MSNBC.com, MSN, Bing, MoneyTalksNews, Tipico and more. He admits to being somewhat fascinated by the Cambridge coffee webcam back in the Roaring '90s.