OpenAI has introduced a new optional safety feature for ChatGPT called Trusted Contact. The feature is designed to detect conversations that indicate serious self-harm concerns and, when triggered, sends an alert to a designated contact via SMS or email.
How it works
Trusted Contact uses natural language processing and machine learning to identify high-risk conversations within ChatGPT. When the system detects language that suggests a user may be at serious risk of self-harm, it can discreetly notify a pre-selected confidant. The feature is entirely optional: users must actively enable it and choose whom to notify.
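To make the detect-then-notify flow concrete, here is a minimal sketch of how such a pipeline could be structured. Everything in it is an assumption for illustration: OpenAI has not published its classifier, thresholds, or alerting code, so the keyword-based `risk_score`, the `TrustedContact` record, and the `maybe_alert` function are hypothetical stand-ins, not the actual implementation.

```python
from dataclasses import dataclass
from typing import Optional

# Toy stand-in for the real ML classifier's training signal (hypothetical).
CRISIS_PHRASES = {"hurt myself", "end my life", "no reason to live"}

@dataclass
class TrustedContact:
    name: str
    channel: str   # "sms" or "email", per the article's description
    address: str

def risk_score(message: str) -> float:
    """Toy classifier: fraction of known crisis phrases present in the text.
    A production system would use a trained model, not keyword matching."""
    text = message.lower()
    hits = sum(1 for phrase in CRISIS_PHRASES if phrase in text)
    return hits / len(CRISIS_PHRASES)

def maybe_alert(message: str, contact: Optional[TrustedContact],
                threshold: float = 0.3) -> Optional[str]:
    """Return an alert payload only if the user opted in (a contact is set)
    and the message crosses the risk threshold; otherwise return None."""
    if contact is None:  # feature is opt-in: no designated contact, no alert
        return None
    if risk_score(message) < threshold:
        return None
    return f"[{contact.channel}] to {contact.address}: your contact may need support"

contact = TrustedContact("Sam", "sms", "+1-555-0100")
print(maybe_alert("I feel like there is no reason to live", contact))
print(maybe_alert("What a lovely day", contact))  # below threshold: None
```

The opt-in check coming first mirrors the feature's design: with no designated contact, no analysis result ever leaves the conversation. The fixed `threshold` also illustrates the tradeoff discussed below, since lowering it raises the rate of false-positive alerts.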
Setup and availability
The feature is available to ChatGPT users worldwide. To enable it, users navigate to their account settings and designate a trusted contact — typically a friend, family member, or mental health professional. The contact receives a notification only if the system flags a conversation as meeting the threshold for serious self-harm concern.
Tradeoffs
Trusted Contact is a proactive safeguard, but it comes with limitations. The system relies on pattern recognition in text, which means it may miss nuanced or indirect expressions of distress. Conversely, false positives could lead to unnecessary alerts. OpenAI has not disclosed the specific thresholds or training data used to trigger notifications, making it difficult for users to predict when the feature will activate.
Privacy is another consideration. Users must trust that the system correctly interprets their conversations and that alerts are sent only when warranted. OpenAI states that the feature is designed to be discreet, but the act of notifying a third party inherently involves sharing sensitive information.
When to use it
Trusted Contact is best suited for users who already have a support network and want an additional layer of safety when using ChatGPT for mental health-related conversations. It is not a replacement for professional mental health services, crisis hotlines, or emergency intervention. OpenAI recommends that users in immediate danger contact local emergency services.
Bottom line
Trusted Contact adds a practical safety mechanism to ChatGPT, giving users a way to loop in a trusted person if the AI detects serious risk. It is a useful optional tool, but its effectiveness depends on the accuracy of the underlying detection system and the user's willingness to share sensitive data with a third party.