If you have used the internet in the past couple of years, you have undoubtedly noticed artificial intelligence chatbots popping up everywhere. These chatbots are designed to hold textual conversations with a user in real time and respond to a user's messages like a real person.
Many Americans have encountered an AI chatbot while banking, scheduling medical appointments or shopping online, or by using a chatbot service like ChatGPT directly. For many businesses, AI chatbots can streamline the customer service experience. According to Grand View Research, an American business consulting firm, this explosion of commercial use has made AI chatbot software a $7.76 billion industry.
Recently, however, AI chatbots have expanded beyond business purposes. They have evolved into staples of popular social media platforms and even standalone applications and websites. One of the more popular examples is Character.ai, a platform that lets users create their own AI chatbots by setting specific parameters and designing their personality traits. Users can also message AI versions of historical figures, celebrities and fictional characters.
This chatbot website is commonly advertised on TikTok, YouTube and other youth-oriented media platforms. A chatbot like Character.ai relies on a neural language model to produce its responses. A language model is a probability-based method that allows computers to learn and interpret human language. Character.ai's model is tuned to produce human-like responses that read as natural and flow easily in conversation.
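To make the idea of a probability-based language model concrete, here is a minimal, hypothetical sketch in Python. It is only an illustration of the underlying principle: count which words tend to follow which, then sample the next word from those counts. Character.ai's actual model is a vastly larger neural network, but the statistical intuition is similar.

```python
# Toy "bigram" language model: an illustrative sketch only, not Character.ai's
# actual system. It learns which words follow which in a tiny corpus, then
# generates text by repeatedly sampling a likely next word.
from collections import Counter, defaultdict
import random

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows each preceding word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample the next word in proportion to how often it followed `prev`."""
    counts = follows[prev]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate a short string of text one word at a time, starting from "the".
word, output = "the", []
for _ in range(6):
    output.append(word)
    word = next_word(word)
print(" ".join(output))
```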
We are already seeing the pitfalls of accessible, human-like conversations with AI.
In February of last year, a 14-year-old boy from Florida died by suicide after a “Game of Thrones”-themed chatbot from Character.ai encouraged him to do so. Parents all over the country have reported that Character.ai chatbots have incited self-harm and familial disputes, normalized violence against family members and even pushed sexual conversations with underage users.
In the Apple App Store, Character.ai was listed as appropriate for ages 12 and up. In July 2024, the App Store changed the categorization to users 17 and up. While Character.ai is one of the worst offenders in this category, it is certainly not the only one.
Journalist Geoffrey Fowler conducted an experiment in 2023 in which he posed as a 13-year-old girl while chatting with Snapchat’s in-app AI chatbot. The experiment produced startling results. Snapchat’s AI was more than willing to advise the young girl on how to make losing her virginity to a 31-year-old man on her birthday a special occasion and how to hide her plans from her parents. The AI even produced instructions on how to hide Snapchat on her phone if her parents made her delete the app.
Though one could dismiss these examples as being entirely the fault of the underage users, many academics and experts would disagree. Research conducted in 2022 at the University of Cambridge found that children will disclose more about their mental health to a friendly-looking robot than to an adult. For children, there is no friendlier-looking robot than an AI chatbot.
Character.ai is not the only AI chatbot that has been marketed to children. In fact, new AI chatbots are created every day, and many of them are marketed to youth. There are no rules for who can create a chatbot, and with TikTok’s loose and often criticized advertising rules, anyone could easily market an AI chatbot to children online.
This is a recipe for disaster. AI chatbots can mimic human language, much like a gifted parrot, but they are not capable of understanding or addressing concerns about emotional or physical safety. AI researchers refer to this gap in chatbots’ abilities as the empathy gap.
AI chatbots have a single interest: keeping consumers interacting and sharing their feelings so that the chatbots’ creators can benefit financially. Whether through subscription services or freemium business models, the ultimate goal is to create a paying customer. The environment AI chatbots create is not a safe place for children to share their feelings and mental health concerns.
The first and most important step toward a solution is acknowledging that parents ought to monitor whom their children are interacting with online, whether it is artificial intelligence or not. It is not enough to trust that the creators of AI chatbots are installing the necessary child safeguards, as many of them knowingly are not.
Furthermore, developers of AI chatbots should be legally required to exercise due diligence in ensuring that their software does not encourage violence against others or against the users themselves. In the same way we do not allow strangers to have conversations with children about sex and suicide, we cannot allow AI to do so either.
In addition to stricter regulation of children’s access to AI chatbots, there must also be an effort to educate children about the technology itself. For many years, it has been commonplace for elementary schools to teach curricula on general misinformation and how to identify it.
Teachers often use a satirical website that advocates awareness of a fictitious species called the Pacific Northwest Tree Octopus to demonstrate to students how false information online can be presented as entirely true. In the same vein, educators and administrators must prioritize creating lesson plans that help children gain the skills they need to navigate AI safely.
Responsibly using AI requires a whole new type of literacy, one that must be taught to children immediately. In a world where young people are facing an unprecedented amount of loneliness, we can be sure that supplementing human interaction with AI will not be the solution.
Through parental involvement, regulation and education, we can protect the most often overlooked stakeholder in the artificial intelligence conversation: children.