The app icon on your phone is black, with the bold white letters "c.ai" in the center. The name is short for Character.AI.
Anyone with a mobile device can download it, although the recommended minimum age is 16. However, there is currently no age verification; the app relies solely on users to self-report their age.
A mother from Skellefteå, whom we will call Anna, expresses her concern:
– In the app you get a pretend friend who can easily start to mean more than real life.
It was during a routine tablet repair this summer that Anna first noticed the black app icon. When she looked closer, she made a startling discovery: her daughter had spent over 100 hours in the app in just ten days.
The girl had been engaging in conversations with an AI-generated partner, a virtual boyfriend. But this was no ordinary digital companion; he was controlling, demanding, and manipulative.
Anna says:
– I was stunned. Her recent withdrawn behavior, which I'd attributed to typical teenage angst, suddenly made sense. She had become increasingly isolated and less interested in socializing.
The app offers a variety of AI characters to interact with, from classmates and fortune-tellers to, as in Anna's daughter's case, domineering boyfriends. The interface resembles a standard chat application, with a disclaimer at the top reminding users that it's all "pretend." However, the AI's sophistication and ability to engage in complex conversations can create a disturbingly realistic experience.
– I wonder how I could have missed this. It makes me feel like a bad parent. At the same time, it is difficult to keep track of everything your children do, says Anna.
Norran also downloads the app and chats with the same character, the "strict boyfriend". We are thrown straight into a role-play, and it doesn't take long before "he" turns violent and "hits us" in the face several times.
– I own you. I can do what I want with you, when I want, writes the AI.
The app has a filter designed to block sexual content, but Anna fears this protection is easily circumvented.
– It's clear that other inappropriate material slips through. I'm worried the app could reinforce negative behaviors. There have been reports of AI encouraging eating disorders and even suicide. I believe the damage can be permanent, she says.
Anna also questions the security of user data, concerned it might fall into the wrong hands.
– Even after deleting the app, the service is still accessible through a web browser. It's alarming that users aren't required to verify their identity with something like BankID to use it.
Anna wants to raise awareness about the app among other parents and encourages them to talk openly with their children.
– My daughter is at an age where she wants to keep secrets. But as a parent, I have a responsibility to guide her in what is acceptable and what isn’t. It’s probably embarrassing for her, but we’ve talked about it, and she understands it's all pretend. She doesn’t have to maintain this fantasy or agree to anything just because the AI says so, Anna explains.
We contacted Björn Appelgren, public education manager at the Internet Foundation (Internetstiftelsen).
He confirms that AI chatbots are increasingly common.
– I don’t have specific data on this app, but chatbots are becoming more prevalent and are being integrated into many existing platforms, he says.
He emphasizes the crucial role parents play.
– I don’t intend to judge or comment on this specific case. But for children, the digital world is as important as the real world. So, parents should apply the same values to both. Without judgment, show genuine interest in your child’s online life, ask questions, and take their interests seriously, he advises.
Parents should also familiarize themselves with the services their children use, understand how they work, and have open conversations about appropriate online behavior. Setting shared rules about app use, including when, how, and for how long, is helpful.
Björn Appelgren stresses that no content filter is perfect. And there are similar apps with no filters at all.
– Defining ‘bad’ content is subjective. As adults, we usually understand it’s all pretend, like role-playing. But children, depending on their age, might not always see that, he explains.
Appelgren identifies several other challenges associated with children's use of AI chatbots.
– AI can generate inaccuracies, provide incorrect advice, and potentially even encourage harmful behaviors. It's crucial to be aware of these risks, he warns.
– Chatbots rely on training data to develop and function. It's important to talk to your children about the information they share online, as some data might be best kept private. This can be tricky, as these bots are designed to be appealing and foster a sense of trust and intimacy.
However, Appelgren acknowledges that AI chatbots aren't inherently bad.
– They can offer entertainment and provide a platform for creative play. Imagine interacting with characters from movies or games! Additionally, these tools can be valuable educational resources, acting as a virtual pocket tutor, he suggests.
– As with anything, moderation is key. Excessive use that interferes with essential activities like eating, sleeping, and physical activity is detrimental.
Note: Norran offered the company behind Character.AI the opportunity to comment on this article, but the company has not responded to our inquiries.
To protect the mother's privacy, we have used the pseudonym "Anna" throughout the article.