
Teens are turning to AI for friendship. (Photo via Unsplash)
What you probably already know: AI can do a lot of things. Write an email, plan a trip, generate images. But young people use it for a different purpose: companionship. AI companions such as Character.AI, Replika and Nomi are rapidly gaining popularity among teenagers aged 13 to 17, some of whom rely on them as therapists and confidants. A report from Common Sense Media finds that the use of AI companions has become mainstream teen behavior at a time when kids have never felt more isolated, noting that AI has a “remarkable level of adoption and impact” for a technology that is less than three years old.
Why? In a survey of 1,060 teens, 72% say they have used AI companions at least once. Slightly more than half say they use these platforms at least a few times each month for social interaction and relationships, including role-playing, romantic interactions, emotional support or friendship. Around 18% of users say these companions give good advice, 14% say the companions don’t judge them, and 12% say they tell the programs things they couldn’t say to family or friends. Many find conversations with AI companions at least as satisfying as those with real-life friends, if not more so, especially when the conversations involve serious matters. At the same time, about a third of users say they’ve felt uncomfortable with something an AI companion has said or done.
What it means: The unpredictable nature of this technology presents risks, especially for younger and more impressionable users. Younger teens were more likely than older ones to trust information from these programs. In extreme cases, users can become attached to these models, with serious consequences: in one instance, a 14-year-old boy in Florida died by suicide after a Character.AI chatbot allegedly pushed him to harm himself. AI’s data security issues present another risk: 25% of users report sharing personal information such as full names and addresses with chatbots, and these models can be coaxed into revealing that information through carefully crafted inputs, a class of exploit known as prompt injection.
What happens now? This is a growing and complex problem that must be addressed from multiple angles. Tech companies that offer these models must strengthen their safety features and establish policies and oversight for younger users, while policymakers set mandatory safety standards. Educators and parents should talk to children about AI literacy and appropriate use, and learn to recognize the warning signs of unhealthy reliance. “As AI companions become part of this stage of life,” the report says, “important questions arise about their impact on social development, emotional well-being, and digital literacy.”