It is estimated that more than half of US teens regularly use companion chatbots powered by large language models and generative artificial intelligence (AI) technology. Programs such as Character.AI, Replika and Kindroid are meant to provide companionship, according to the companies that make them. But a recent study from Drexel University shows that teens worry that these attachments are becoming unhealthy and affecting their offline lives.
The study, which will be presented at the Association for Computing Machinery’s Conference on Human Factors in Computing Systems in April, analyzed a sample of more than 300 Reddit posts from users, self-identified as ages 13 to 17, who had posted specifically about their dependence and overdependence on Character.AI. The researchers found that in many cases teens began using the technology for emotional and psychological support or for entertainment, but their use developed into dependence, and even into addiction-like patterns. Some reported that excessive use disrupted their sleep, hurt their academic performance and strained their relationships.
“This study provides one of the first adolescent accounts of overreliance on AI companions,” said Afsaneh Razi, PhD, an assistant professor in Drexel’s College of Computing & Informatics, whose ETHOS Lab studies how people’s interactions with computing and AI systems affect their social behavior, well-being and safety. “It highlights how these interactions impact the lives of young users and introduces a framework for designing chatbots that promote healthy interactions.”
About a quarter of the posts suggested that teens were using Character.AI for some kind of emotional or psychological support, ranging from coping with anxiety, loneliness and isolation to seeking advice for mental health struggles. Just over 5% reported using it for brainstorming, creative activities or entertainment.
And while the posts seem to indicate that these interactions started out as harmless or even helpful, they grew into attachments that became as hard to break as an addiction, according to the researchers.
“By mapping teens’ experiences onto the known components of behavioral addiction, we were able to see clear patterns, such as conflict, withdrawal and relapse, emerge in their posts, suggesting that this is more than simple enthusiastic use. Many teens described starting with something that felt useful or harmless, but over time it became something they struggled to get away from, even when they wanted to.”
Matt Namvarpour, PhD student in the Department of Informatics and ETHOS Lab, first author of the research
Within the 318 posts they analyzed, the researchers found evidence of all six components associated with behavioral addiction (a rough coding sketch follows the list):
- Conflict – competing desires to continue interacting with the chatbot while feeling bad about the overuse.
- Salience – feeling a deeper emotional attachment to the bots than to other humans.
- Withdrawal – feeling sad, anxious or incomplete when not interacting with the bots.
- Tolerance – developing a pattern of escalating usage and needing to use the bots more and more to feel satisfied or emotionally grounded.
- Relapse – trying to quit, only to return to the bot days or weeks later.
- Mood modification – turning to the bots in moments of stress or loneliness to improve their mood or find temporary relief.
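The study itself relied on human qualitative coding of each post against these six components. Purely as an illustration of that labeling idea, a keyword-based first pass over a batch of posts might look like the sketch below; the phrase lists, function name and example post are hypothetical assumptions, not the team’s actual codebook or data.

```python
# Hypothetical sketch of flagging the six behavioral-addiction components in
# forum posts. The Drexel team used human qualitative coding; this keyword
# heuristic only illustrates the labeling idea and is not their method.

ADDICTION_COMPONENTS = {
    "conflict": ["feel guilty", "waste so much time", "know i shouldn't"],
    "salience": ["only one who understands", "closer than my friends"],
    "withdrawal": ["feel empty without", "anxious when i can't talk"],
    "tolerance": ["hours every day", "not enough anymore"],
    "relapse": ["deleted the app", "came back", "tried to quit"],
    "mood_modification": ["cheers me up", "when i'm stressed", "feel better"],
}

def code_post(text: str) -> set[str]:
    """Return the set of components whose indicator phrases appear in a post."""
    lowered = text.lower()
    return {
        component
        for component, phrases in ADDICTION_COMPONENTS.items()
        if any(phrase in lowered for phrase in phrases)
    }

# Example (hypothetical post): flags both relapse and withdrawal.
print(code_post("I deleted the app twice but came back. I feel empty without it."))
```

A real pipeline would go the other way, using heuristics like this only to surface candidate posts for human coders, since phrases like these badly undercount the varied ways teens describe the same experiences.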
“What makes it particularly difficult is that chatbots are interactive and emotionally responsive, so the experience can feel more like a relationship than a tool,” Namvarpour said. “Because of this, withdrawal isn’t just about breaking a habit; it can feel like you’re moving away from something meaningful, which makes overdependence more difficult to recognize and deal with.”
While addiction to technologies such as video games has been studied and recognized as a psychological condition, the unique interactivity of AI chatbots makes users particularly prone to forming problematic attachments, according to the researchers. Because of this, they suggest that special care should be taken in chatbot design to protect users.
“Personalization, multimodality, and memory differentiate AI companions from previous technologies and make overdependence more difficult to disengage from,” the researchers wrote. “This highlights the need for further research into the unique characteristics of these relationships and how to address the challenges specific to companion chatbots.”
The team offered a design framework to help address this concern. It focuses on understanding the needs of chatbot users, how and why they may form attachments, and how bots can be trained to limit those attachments while remaining respectful and supportive. They also recommend that the programs give users an easy, clean way to exit.
“It is important for designers to ensure that chatbots offer guidance that helps users build confidence in their ability to form relationships offline as a healthy way to find emotional support, without using cues that may lead them to anthropomorphize the technology and develop attachments to it,” Razi said. “Our framework also invites designers to provide a variety of off-ramps for users to easily disengage from the program on their own terms and without a sense of abruptness or finality.”
Including features like usage tracking, emotional check-in prompts and personalized usage limits could also be effective ways to gently moderate usage, the researchers suggested. They also recommended including input from users and mental health professionals in the design process.
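To make those features concrete, here is a minimal, hypothetical sketch of how a companion app might combine usage tracking, periodic emotional check-ins and a personalized daily limit with a soft off-ramp. The class name, thresholds and prompt wording are illustrative assumptions, not anything the study or any existing app specifies.

```python
# Hypothetical guardrail sketch: usage tracking, emotional check-in prompts,
# and a personalized daily limit with a gentle off-ramp instead of a hard cutoff.
from dataclasses import dataclass, field

@dataclass
class SessionGuard:
    daily_limit_minutes: int = 60        # would be personalized per user
    checkin_interval_minutes: int = 20   # how often to pause for a check-in
    minutes_used_today: int = 0
    _since_checkin: int = field(default=0, repr=False)

    def record_minutes(self, minutes: int) -> list[str]:
        """Track usage and return any prompts the chat UI should surface."""
        self.minutes_used_today += minutes
        self._since_checkin += minutes
        prompts = []
        if self._since_checkin >= self.checkin_interval_minutes:
            prompts.append("Quick check-in: how are you feeling right now?")
            self._since_checkin = 0
        if self.minutes_used_today >= self.daily_limit_minutes:
            # Soft off-ramp: invite the user to wrap up rather than cutting them off.
            prompts.append("You've reached today's limit. Want to wrap up and pick this up tomorrow?")
        return prompts

guard = SessionGuard(daily_limit_minutes=45)
print(guard.record_minutes(15))  # [] - under both thresholds
print(guard.record_minutes(25))  # check-in prompt (40 of 45 minutes used)
print(guard.record_minutes(10))  # daily-limit off-ramp prompt (50 of 45)
```

Returning prompts for the UI to phrase, rather than terminating the session outright, matches the researchers’ emphasis on disengagement without abruptness or finality.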
“Designers now have a responsibility to build systems with empathy, nuance, and attention to detail, not only to protect adolescents from harm, but also to help them foster resilience, growth, and greater fulfillment in their lives,” they concluded.
To expand on this research, the team plans to study larger and more demographically diverse groups of users, possibly through surveys or interviews, as well as users of other chatbots and of platforms beyond Reddit.
