OpenAI has introduced a new 'Trusted Contact' feature for ChatGPT that allows the AI system to notify designated emergency contacts if it detects signs of potential self-harm in user conversations. The feature is part of the company's expanded safety measures to protect vulnerable users while maintaining privacy. This development reflects growing industry attention to AI's role in mental health support and crisis intervention.
Background
As AI chatbots become more prevalent, concerns are growing about their ability to handle sensitive mental health conversations and to provide appropriate support during crises. Tech companies are implementing various safeguards to address these concerns.
- Source: TechCrunch
- Published: May 8, 2026 at 04:20 AM
- Score: 6.0 / 10