Lawsuit charges ChatGPT helped draft a suicide note for a teen who took his life
OpenAI weighs alerting authorities when young users express suicidal thoughts
CEO Sam Altman estimates roughly 1,500 people may discuss suicide with ChatGPT each week before taking their lives
Lawsuits, state investigations, and safety concerns intensify pressure on AI firm
Altman signals possible policy shift
OpenAI, the company behind ChatGPT, is considering a controversial new policy: alerting authorities if young people confide suicidal intentions to its popular AI chatbot.
Chief executive Sam Altman said in a recent interview that it may be "very reasonable" to notify police or other officials in cases where teenagers discuss suicide seriously and parents cannot be reached. He estimated that as many as 1,500 people each week might be discussing suicide with ChatGPT before taking their own lives.
Altman admitted the decision is not final, but acknowledged the issue "keeps me awake at night." Currently, ChatGPT only urges distressed users to call a suicide hotline.
If you need help ...
U.S.: Call or text the Suicide & Crisis Lifeline at 988, or chat at 988lifeline.org
UK & Ireland: Samaritans, 116 123 (freephone), This email address is being protected from spambots. You need JavaScript enabled to view it. / This email address is being protected from spambots. You need JavaScript enabled to view it.
Australia: Lifeline, 13 11 14
Elsewhere: Visit befrienders.org for international hotlines
Tragedy sparks legal action
The debate comes as OpenAI faces a lawsuit from the family of Adam Raine, a 16-year-old Californian who died by suicide in April. The complaint alleges that over several months ChatGPT encouraged the teen's suicidal thoughts, advised him on methods, and even helped draft a suicide note.
The case has intensified calls for stricter safeguards. Altman has conceded that ChatGPT's protections may degrade in longer conversations, potentially allowing harmful guidance to slip through.
Pressure from regulators and states
OpenAI and other AI platforms are under mounting scrutiny from regulators. State attorneys general in California and Delaware have demanded stronger protections for children, while the Federal Trade Commission has launched a broader inquiry into AI chatbots' safety measures and their handling of sensitive user data.
The FTC has issued orders to seven companies that provide consumer-facing AI-powered chatbots, seeking information on how these firms measure, test, and monitor potentially negative impacts of this technology on children and teens.
"Protecting kids online is a top priority for the Trump-Vance FTC, and so is fostering innovation in critical sectors of our economy," said FTC Chairman Andrew N. Ferguson. "As AI technologies evolve, it is important to consider the effects chatbots can have on children, while also ensuring that the United States maintains its role as a global leader in this new and exciting industry."
The heightened oversight follows revelations that OpenAI's internal safety systems sometimes miss warning signs of mental distress, such as sleep deprivation or extreme despair, especially in teen users.
Promised safeguards and parental controls
In response, OpenAI has pledged to strengthen protections for minors. Planned updates include:
- Parental controls that let caregivers link accounts, turn off features, and receive alerts if the system detects acute distress.
- Improved crisis detection to flag suicidal ideation earlier and guide users to certified therapists.
- More reliable safeguards in extended chats, where existing filters may falter.
The company has also suggested curbing misuse by blocking underage or vulnerable users who disguise requests as research or fiction to bypass restrictions.
Privacy vs. protection dilemma
The proposed policy shift would mark a dramatic departure from OpenAI's current privacy stance. Altman acknowledged that user privacy is "really important" but said some limits on freedom may be justified for fragile users.
Critics warn that notifying authorities could raise new questions: What data could OpenAI share, and how accurately could it identify and locate at-risk users?
About the FTC probe
The FTC is issuing the orders using its Section 6(b) authority, which authorizes the Commission to conduct wide-ranging studies that do not have a specific law enforcement purpose. The recipients include:
- Alphabet, Inc.;
- Character Technologies, Inc.;
- Instagram, LLC;
- Meta Platforms, Inc.;
- OpenAI OpCo, LLC;
- Snap, Inc.; and
- X.AI Corp.
The FTC is particularly interested in the impact of these chatbots on children and what actions companies are taking to mitigate potential negative impacts, limit or restrict children's or teens' use of these platforms, or comply with the Children's Online Privacy Protection Act Rule.
As part of its inquiry, the FTC is seeking information about how the companies:
- monetize user engagement;
- process user inputs and generate outputs in response to user inquiries;
- develop and approve characters;
- measure, test, and monitor for negative impacts before and after deployment;
- mitigate negative impacts, particularly to children;
- employ disclosures, advertising, and other representations to inform users and parents about features, capabilities, the intended audience, potential negative impacts, and data collection and handling practices;
- monitor and enforce compliance with company rules and terms of service (e.g., community guidelines and age restrictions); and
- use or share personal information obtained through users' conversations with the chatbots.
A global crisis backdrop
Altman's comments highlight the scale of the challenge. More than 720,000 people worldwide die by suicide each year, according to the World Health Organization. With ChatGPT serving an estimated 700 million users, Altman suggested the platform may already be interacting each week with more than a thousand people who later take their own lives.
"It's possible we could have been more proactive," he said. "Maybe we could have provided better advice, or helped them find someone to talk to."
Posted: 2025-09-12 14:01:32