Regulators demand answers on privacy, disclosures, and safeguards
- The FTC has launched a formal study asking several AI chatbot companies how they handle data, safety, and protections for children when these bots act like companions.
- Firms under scrutiny include big names like Meta, OpenAI, Instagram, X.AI, Snap, and Alphabet. They must explain how they test for negative effects on users and how they inform users and parents.
- The focus is especially on potential risks to children and teens: how these bots are used, what protections are in place, and whether age restrictions, privacy rules, or disclosures are being followed.
The Federal Trade Commission (FTC) is taking a closer look at AI chatbots that are built to mimic humans, ones that might feel like friends or confidants.
These bots use generative AI to carry on conversations that seem warm, emotional, or caring. Because of this, there's concern that people (especially kids and teens) might start trusting them more than they should.
The FTC is using what are called 6(b) orders, tools that let the agency gather detailed information not as part of a particular legal case but as part of a broad study. It's asking seven major companies to provide data and explanations: Alphabet (Google's parent), Meta, OpenAI, Snap, Instagram, Character Technologies, and X.AI.
"Protecting kids online is a top priority for the Trump-Vance FTC, and so is fostering innovation in critical sectors of our economy," FTC Chairman Andrew N. Ferguson said in a news release.
"As AI technologies evolve, it is important to consider the effects chatbots can have on children, while also ensuring that the United States maintains its role as a global leader in this new and exciting industry. The study we're launching today will help us better understand how AI firms are developing their products and the steps they are taking to protect children."
What consumers should pay attention to
Here's what this means for you, especially if you or someone in your family uses chatbots that feel like companions:
- Safety for Kids & Teens. The FTC wants to know what companies are doing before and after releasing these bots to detect harms. That includes emotional or psychological harm, misinformation, manipulation, or users simply relying too much on a bot instead of human help.
- Transparency & Disclosures. Are parents and users being told what these bots can (and can't) do? Do people know how their data is stored, whether chats are shared, and how the bot was trained? The FTC specifically wants information on how companies disclose things like intended audience, data collection, risks, and how bots are advertised.
- Privacy & Data Handling. When you talk to a chatbot, that conversation might be saved, used for training, or shared. The FTC is asking companies to detail how they handle your inputs (what you say) and outputs (what the bot says back), and whether they share your information with others. For children, the law adds extra protections (e.g., the Children's Online Privacy Protection Act, or COPPA).
- Age Limits, Terms & Moderation. How do companies enforce rules about who can use the bots? If there are age limits, are they checked and enforced? What about moderation of content or behavior when things go wrong? The FTC wants to see how policies are enforced after the product is live.
Why it matters
Even if a product seems harmless or fun, when it simulates emotions or friendship it can influence how people think, feel, and act. Children and teens are less experienced at setting boundaries, recognizing risk, and distinguishing real relationships from simulated ones. Knowing that companies are being asked to show what safeguards are in place means there's hope for stronger protections.
"I have been concerned by reports that AI chatbots can engage in alarming interactions with young users, as well as reports suggesting that companies offering generative AI companion chatbots might have been warned by their own employees that they were deploying the chatbots without doing enough to protect young users," FTC Commissioner Melissa Holyoak said in a statement.
"As use of AI companion chatbots continues to increase, I look forward to receiving and reviewing responses to the Section 6(b) orders we are issuing today."
Posted: 2025-09-11 18:56:47