Experts share how to spot risks, set limits, and guide teens toward healthier tech use
- Experts say chatbots can support learning and creativity, but relying on them for emotional support or advice can pose risks for teens' mental health and development.
- Warning signs of problematic AI use include emotional reliance on chatbots, distress when access is limited, and pulling away from real-world relationships.
- Clear limits, open conversations, and encouraging offline relationships help teens build a healthier relationship with AI without fear or overreaction.
For today's teens, turning to an AI chatbot for answers or even comfort can feel as natural as texting a friend. But for parents, the rise of AI as a source of advice and companionship brings understandable concerns about emotional development, mental health, and online safety.
While AI tools can offer convenience and support, experts warn that unchecked use may sometimes do more harm than good. That's especially true if teens begin relying on chatbots instead of trusted adults, friends, or mental health professionals.
To help parents make sense of it all, ConsumerAffairs spoke with Adam Chekroud, co-founder and president of Spring Health, and Alvin McLean, dean of the JFK School of Psychology and Social Sciences at National University. They share expert-backed guidance on how parents can stay involved, recognize red flags, and help teens build a healthier relationship with AI without fear or overreaction.
Risks of asking AI for advice
The experts shared some of the top risks associated with teens using AI for advice or companionship:
- Mistaking AI-generated responses for legitimate clinical guidance
- Becoming overly reliant on chatbots for validation or support, using them as a substitute for human connection or professional care
- Eroding trust in professional mental health care
"Because chatbots are always available and nonjudgmental, they can unintentionally reinforce avoidance of difficult emotions or conversations," McLean explained. "There is also the risk of misinformation, oversimplified advice, or responses that lack nuance around mental health, relationships, or self-harm, which can be especially harmful for adolescents who are still developing critical thinking skills and emotional regulation."
"Confusing AI tools with therapists also raises privacy concerns, increases the likelihood of missed clinical red flags, and can erode trust in professional mental health care over time," Chekroud continued.
Is there healthy AI use?
The short answer: yes.
"Healthy use looks curious and functional, such as asking questions, brainstorming ideas, or using AI for schoolwork or creativity," McLean said.
"Healthy exploration of AI occurs when teens use it to satisfy curiosity, support learning, or gather information, and can question, explain, or fact-check the responses they receive," Chekroud said. "In these cases, AI is one tool among many, not a primary source of guidance or emotional support."
Signs of problematic use
On the other hand, parents should know the signs of problematic AI use to look for:
- Secrecy about AI use
- Excessive time spent chatting
- Emotional attachment to the chatbot
- Distress when access is limited
- Relying on AI for reassurance, advice, or connection they would typically seek from people
- Increased isolation
- Reduced interest in real-world relationships
- Noticeable changes in mood or behavior
"Warning signs can include withdrawal from friends or family, using the chatbot as a primary source of advice for personal or emotional issues, or expressing that the AI 'understands me better than people do,'" McLean said.
Setting boundaries around AI use
The experts agree that it's important for parents to set boundaries and limits around their kids' AI use, highlighting that AI is not a substitute for human connection.
"Parents should set clear expectations around when, why, and how AI tools are used by their children," McLean said. "This can include time limits, device-free periods, and guidelines that AI should not be used for mental health advice, relationship decisions, or crisis support."
"Teens are still developing judgment and may interpret AI-generated responses as facts or truth," Chekroud said. "Reinforcing that AI is a supplement to, not a replacement for, human connection, trusted adults, or professional care helps reduce the risk of emotional reliance and misuse."
Life outside of chatbots
One of the biggest pieces of advice for parents: reinforce to kids that there is life outside of AI chatbots.
"Encouraging extracurricular activities, shared family time, and in-person social experiences helps reinforce that meaningful connection comes from human interaction," McLean said. "Modeling balanced technology use and having regular conversations about emotions, stress, and decision-making also reduces the likelihood that teens will turn to AI as their primary outlet."
Another tip from Chekroud: normalize awkwardness.
"Teens often turn to AI because it feels safer than being vulnerable with people," he said. "Parents can reinforce that awkwardness is part of learning and growth, not a sign of failure."
Posted: 2026-01-20 22:14:16