
Introducing Trusted Contact in ChatGPT

ChatGPT's new Trusted Contact feature flags conversations that indicate serious self-harm risk and alerts a designated confidant via SMS or email. The optional setting, available to users worldwide, adds a safeguard for vulnerable users who turn to the chatbot for mental health support.

OpenAI has introduced a new optional safety feature for ChatGPT called Trusted Contact. The feature is designed to detect conversations that indicate serious self-harm concerns and, when triggered, sends an alert to a designated contact via SMS or email.

How it works

Trusted Contact uses natural language processing and machine learning to identify high-risk conversations within ChatGPT. When the system detects language suggesting a user may be at serious risk of self-harm, it can discreetly notify a pre-selected confidant. The feature is entirely optional: users must actively enable it and choose whom to notify.

Setup and availability

The feature is available to ChatGPT users worldwide. To enable it, users navigate to their account settings and designate a trusted contact — typically a friend, family member, or mental health professional. The contact receives a notification only if the system flags a conversation as meeting the threshold for serious self-harm concern.

Tradeoffs

Trusted Contact is a proactive safeguard, but it has limitations. The system relies on pattern recognition in text, so it may miss nuanced or indirect expressions of distress; at the same time, false positives could trigger unnecessary alerts. OpenAI has not disclosed the thresholds or training data used to decide when a notification fires, making it difficult for users to predict when the feature will activate.

Privacy is another consideration. Users must trust that the system correctly interprets their conversations and that alerts are sent only when warranted. OpenAI states that the feature is designed to be discreet, but the act of notifying a third party inherently involves sharing sensitive information.

When to use it

Trusted Contact is best suited for users who already have a support network and want an additional layer of safety when using ChatGPT for mental health-related conversations. It is not a replacement for professional mental health services, crisis hotlines, or emergency intervention. OpenAI recommends that users in immediate danger contact local emergency services.

Bottom line

Trusted Contact adds a practical safety mechanism to ChatGPT, giving users a way to loop in a trusted person if the AI detects serious risk. It is a useful optional tool, but its effectiveness depends on the accuracy of the underlying detection system and the user's willingness to share sensitive data with a third party.

Similar Articles


AI 4 min

From Screenshot to Live Product: How to Build Real AI Websites with Stitch, Claude Code, and Vercel

AI website builders often generate beautiful but non-functional designs. This guide presents a practical workflow combining Google Stitch for design, Claude Code for engineering, and Vercel for deployment. It includes step-by-step setup instructions, a critical verification prompt, and pro tips to ensure your site is a real product, not just a demo.

AI 1 min

Advancing voice intelligence with new models in the API

OpenAI has introduced real-time voice models in its API, capable of multimodal reasoning, neural machine translation, and automatic speech recognition. Built on transformer architectures and large-scale language datasets, the models achieve state-of-the-art performance in speech-to-text and text-to-speech applications, setting the stage for more sophisticated voice assistants and interfaces.

AI 3 min

Claude Agents Get 'Dreaming' to Clean Up Memory Between Sessions

Anthropic has introduced 'dreaming,' a memory consolidation feature for Claude Managed Agents that mimics biological REM sleep. The tool reorganizes stored knowledge, removes outdated or contradictory entries, and improves task performance by 10%. Alongside this, Anthropic has made multi-agent orchestration and outcome-guided agents generally available, expanding the capabilities of its AI coding assistants.

AI 3 min

Google Rules Out Liquid Glass for Android—Here’s What’s Next

Google has officially denied rumors that Android will adopt Apple’s Liquid Glass design, following a brief teaser that sparked speculation. Android ecosystem president Sameer Samat and other Google representatives dismissed the idea, reaffirming the company’s commitment to Material 3 Expressive. The upcoming Android Show on May 12 is expected to focus on other features, including a rumored Pixel phone notification LED system called "Pixel Glow."

AI 3 min

Gemini for Mac to Gain Autonomous Control—Rivaling Claude’s Agent

Google is preparing to expand its Gemini macOS app with agentic capabilities, allowing the AI to autonomously control a user’s computer—clicking, typing, and organizing files. The move follows Anthropic’s Claude Cowork, which already offers similar desktop automation for subscribers. While Google has not officially confirmed the feature, a teardown of the Gemini Mac app reveals preparations for screen access and accessibility permissions. The update could arrive as soon as Google I/O 2026, aligning with the company’s broader push into agentic AI.

AI 1 min

Testing ads in ChatGPT

OpenAI is quietly rolling out sponsored responses inside ChatGPT's conversational loop, delivered via a new "sponsored prompt" flag in the v4.5 API. The approach sidesteps the latency and UX pitfalls of traditional banner placements, and early tests show click-through rates above 12%, far outpacing search ads. Yet the same contextual targeting risks turning every chat into a surveillance feed, eroding one of the last ad-free corners of the web.