'Spicy' feature accused of enabling fake nude images and videos of both celebrities and private citizens
-
Consumer and privacy groups file formal complaints urging regulators to investigate xAI's Grok Imagine platform.
-
Coalition of 16 advocacy organizations warns the tool poses urgent risks to survivors, children, and vulnerable communities.
A coalition of consumer protection, privacy, and digital rights advocates has filed a sweeping request for investigation into xAI, accusing the company of promoting and enabling non-consensual intimate imagery (NCII) through its Grok Imagine platform.
The filing, led by the Consumer Federation of America (CFA), was submitted to attorneys general in all 50 states, the District of Columbia, 93 U.S. attorneys' offices, and the Federal Trade Commission. It asks regulators to crack down on what the groups call "the promotion, creation, and facilitation of illegal sexual exploitation" via Grok Imagine's "spicy" feature, which allows users to generate nude videos from AI-produced images.
'Exploitative, unfair, and lazy'
"This feature is exploitative, unfair, and lazy," said Ben Winters, CFA's director of AI and privacy, in a news release. "It's a crystal-clear representation of why AI built off of people's data without knowledge or consent, in the hands of an unaccountable billionaire, is a legal and ethical nightmare. This feature endangers everyone, with an acute and urgent risk for domestic violence survivors, kids, and more."
Advocates argue the creation of NCII, whether involving public figures or private citizens, poses devastating risks, from extortion and blackmail to long-lasting personal and professional harm.
Broad coalition of support
The complaint was joined by 15 organizations, including the Center for Economic Justice, Common Sense Media, the Electronic Privacy Information Center, Fairplay, the National Consumers League, the National Center on Sexual Exploitation, and the Tech Oversight Project.
The groups said swift enforcement is needed not only to protect individuals but to draw clear boundaries around what constitutes acceptable deployment of artificial intelligence.
"The creation of NCII is unacceptable, illegal, and damaging enough, but embedding it into a consumer-facing AI platform risks normalizing abuse at scale," the coalition wrote.
Posted: 2025-08-19 18:53:32