
OpenAI announced on Thursday a new ChatGPT feature called Trusted Contact that allows users to designate another person to receive alerts if conversations indicate possible self-harm concerns.
The feature lets adult ChatGPT users add a trusted third party to their account, such as a friend or family member. If OpenAI’s systems detect conversations that may involve self-harm risk, ChatGPT will encourage the user to contact that person directly.
The company said it may also send an automated notification to the designated contact asking them to check in with the user.
The announcement comes as OpenAI continues to face lawsuits from families who allege that conversations with ChatGPT contributed to their relatives' suicides.
Some of the lawsuits claim the chatbot encouraged harmful behavior or helped users discuss plans related to self-harm.
Human Review Remains Part Of Safety Process
OpenAI said it currently relies on a combination of automated systems and human review to identify potentially dangerous conversations.
According to the company, certain conversational patterns trigger internal safety alerts related to possible suicidal ideation.
Those alerts are then reviewed by OpenAI’s human safety team.
“We strive to review these safety notifications in under one hour,” the company said.
If reviewers determine that a conversation represents a serious safety concern, ChatGPT may send an alert to the user’s trusted contact through email, text message, or an in-app notification.
OpenAI said the alerts are intentionally limited in detail to protect user privacy and do not include full conversation content.
The notifications are designed only to encourage the trusted contact to reach out to the user directly.
Feature Builds On Existing Safety Controls
The Trusted Contact system expands on safeguards OpenAI introduced last September for teen accounts.
Those earlier controls allowed parents to receive safety notifications if the company believed a younger user might face a serious safety risk.
ChatGPT also already displays automated prompts encouraging users to seek professional support services when conversations involve self-harm-related topics.
OpenAI noted that Trusted Contact remains optional.
The company also acknowledged that users can create multiple ChatGPT accounts, limiting the effectiveness of account-based protections and parental oversight features.
“Trusted Contact is part of OpenAI’s broader effort to build AI systems that help people during difficult moments,” the company said in its announcement.
OpenAI added that it plans to continue working with clinicians, researchers, and policymakers to improve how AI systems respond when users may be experiencing emotional distress.
