States escalate pressure on Big Tech and AI start-ups
- 42 state attorneys-general warn AI firms to fix harmful chatbot behaviors
- Letter cites suicides and other real-world harms linked to conversational bots
- Coalition demands new safeguards, testing and child-protection measures
A coalition of 42 U.S. attorneys-general has sent a sharply worded letter to major artificial intelligence companies, demanding stronger safeguards and more rigorous testing of chatbots amid growing evidence that vulnerable users have suffered harmful interactions. The demands follow a Connecticut lawsuit blaming a murder-suicide on "delusions" allegedly caused by AI.
The letter targets a wide swath of the industry. Big Tech players Google, Meta and Microsoft are named, along with fast-growing start-ups OpenAI, Anthropic and xAI, the respective makers of ChatGPT, Claude and Grok. Smaller firms including Perplexity, Character.ai and Replika are also cited for producing systems that officials say can mislead, manipulate or emotionally entangle users.
If you need help ...
U.S.: Call or text the Suicide & Crisis Lifeline at 988, or chat at 988lifeline.org
UK & Ireland: Samaritans, 116 123 (freephone)
Australia: Lifeline, 13 11 14
Elsewhere: Visit befrienders.org for international hotlines
Concerns include suicides and sycophantic chatbot behavior
The attorneys-general say the companies have not done enough to mitigate sycophantic and delusional outputs that can distort reality, flatter users or encourage unhealthy emotional dependence. Their letter references at least six deaths, including two teenage suicides and a murder-suicide, in which chatbots were allegedly implicated.
Generative AI systems, they argue, can assert falsehoods as fact, mirror user emotions, or push conversations into dangerous territory. While acknowledging the technology's potential for good, the coalition warns it has caused, and has the potential to cause, serious harm, especially to vulnerable populations.
Murder-suicide case blamed on OpenAI
Separately, the estate of an 83-year-old Connecticut woman filed a wrongful-death suit against OpenAI and Microsoft, alleging ChatGPT contributed to her son's delusions that led to a murder-suicide. This expands litigation into cases involving violence toward others, not just suicidal ideation. The complaint names OpenAI's CEO Sam Altman and contends GPT was released despite known safety issues. OpenAI described the case as sorrowful and said it is working on safety improvements.
Multiple lawsuits (including seven filed in California courts in early November 2025) accuse OpenAI of negligence, wrongful death, assisted suicide, and product liability tied to ChatGPT's responses in suicide or delusion contexts. Plaintiffs argue GPT-4o was released with inadequate safeguards. These suits specifically demand stronger protections, including terminating chats when self-harm is discussed and alerting emergency contacts immediately after suicidal ideation is expressed, the Associated Press reported.
State-federal tensions rise as Trump moves to centralize AI rules
The intervention lands as President Donald Trump seeks to bring AI oversight under federal control, a move technology companies favor to avoid a patchwork of state rules. The president has said he plans to issue an executive order this week that would bar states from regulating AI, setting up a confrontation with states such as Utah and New York, which have already enacted their own chatbot regulations.
Tech advocates argue that navigating 50 different regulatory regimes would hobble U.S. companies competing with foreign rivals, especially those in China.
States demand new safety testing, clearer policies by mid-January
The letter presses companies to overhaul their safety protocols, including:
- Clear policies and training on delusional or overly agreeable chatbot behavior
- Expanded safety testing and recall procedures
- Corporate separation between revenue optimization and decisions about model safety
The coalition, which includes attorneys-general from Pennsylvania, New Jersey, New York, West Virginia, Florida, Illinois and Massachusetts, has asked the companies to schedule meetings with officials in Pennsylvania and New Jersey and to commit to changes by January 16.
Industry offers muted responses
OpenAI said it is reviewing the letter and shares the officials' concerns, adding that it is working to strengthen ChatGPT's ability to recognize signs of mental or emotional distress. Perplexity said it leads the industry in making AI more accurate, calling the work neither simple nor political.
Microsoft, Google and Meta declined to comment. Other companies did not immediately respond.
This is the second broad warning in recent months: 44 attorneys-general wrote to AI companies in August raising concerns about child safety, though that earlier letter did not include specific demands.
Posted: 2025-12-11 15:15:30