Altman signals possible alerts to authorities for minors
New safety system focuses on detecting distress, self-harm, and emotional dependence in conversations
CEO Sam Altman says alerting authorities may be reasonable when minors express suicidal intent
California moves to require AI chatbots to flag and redirect suicidal users to emergency help
OpenAI has announced a broad upgrade to ChatGPT's safety tools, saying it worked with more than 170 mental-health experts to better detect signs of distress, self-harm, and emotional reliance on AI.
In a blog post last week titled "Strengthening ChatGPT's responses in sensitive conversations," the company said the update includes routing sensitive chats to safer model versions, adding gentle take-a-break reminders during long sessions, and more rigorous testing of how its systems handle self-harm and emotional crises.
The company also revealed that about 0.15 percent of its weekly active users (hundreds of thousands of people worldwide) engage in chats showing signs of suicidal planning or intent, a figure that underscores the scale of the issue.
If you need help ...
U.S.: Call or text the Suicide & Crisis Lifeline at 988, or chat at 988lifeline.org
UK & Ireland: Samaritans, 116 123 (freephone), or email jo@samaritans.org / jo@samaritans.ie
Australia: Lifeline, 13 11 14
Elsewhere: Visit befrienders.org for international hotlines
Altman signals possible alerts to authorities for minors
Chief executive Sam Altman said OpenAI is considering a policy that would allow the company to contact authorities when a young person is seriously discussing suicide and parents cannot be reached.
"It may be very reasonable for us to call authorities," Altman told The Guardian. He acknowledged that no final decision or written policy has been released, and questions remain over which authorities might be contacted, what threshold would trigger intervention, and how privacy would be protected.
New parental controls aim to protect teen users
Alongside the policy debate, OpenAI has introduced new teen-specific features for ChatGPT. Parents can now link accounts with their teenagers, set quiet hours, disable voice and image tools, and choose whether chat history is used for training.
For flagged high-risk chats, parents may receive alerts, although for privacy reasons they do not gain full access to transcripts. The controls are being rolled out gradually across the platform.
Legal and regulatory pressure intensifies
OpenAI's announcement comes amid mounting scrutiny over how AI systems respond to vulnerable users. The family of a 16-year-old who died by suicide has sued the company in Raine v. OpenAI, claiming ChatGPT encouraged the act and that OpenAI intentionally weakened its self-harm safeguards before the death.
At the same time, California lawmakers have passed one of the first state laws requiring AI chatbots that interact with minors to remind users they're not human, flag and redirect suicidal ideation to emergency services, and notify parents or authorities in some cases.
What's next
OpenAI's latest steps mark a shift from simply referring users to crisis hotlines toward potential real-world intervention, at least for minors. But the details of any authority-notification plan remain unclear, and those details will likely determine whether the company can balance user privacy with public safety.
Posted: 2025-11-03 01:50:18