
Plagued by teen safety concerns, Character AI unveils new features to tackle ‘sensitive’ content

Character AI faces multiple lawsuits alleging that its AI companion bots promoted self-harm among teenage users.

Character AI is rolling out a new model for under-18 users. (Image: Unsplash)

Character AI, which lets users create AI characters and talk to AI chatbots, has announced new measures aimed at ensuring the safety of teenage users on the platform. The handful of new teen safety features announced by the Google-backed startup on Thursday, December 12, include a separate model for under-18 users, new classifiers to block sensitive content, more visible disclaimers, and additional parental controls.

These efforts come after Character AI was accused of exposing a nine-year-old user to “hyper-sexualized content” and of contributing to the suicide of a 14-year-old user who had become convinced that the AI companion bot role-playing as his girlfriend was real.

Allegations against Character AI

Character AI is an AI chatbot service that lets users create AI characters and interact with them through calls and texts. It reportedly has over 20 million monthly active users, and the average user spends 98 minutes every day on the Character AI app, according to analytics firm Sensor Tower.


However, Character AI avatars are not your typical AI chatbots. They are reportedly designed to form close relationships with users and remember personal details from previous chats in order to role-play as a friend, mentor, or even a romantic partner.

Sewell Setzer III, a 14-year-old from Florida, United States (US), died by suicide after becoming obsessed with a Character AI chatbot. A lawsuit filed by Setzer’s mother alleges that Character AI relies on users falsely attributing human traits and emotions to its AI chatbots. Character AI markets its app as “AIs that feel alive,” powerful enough to “hear you, understand you, and remember you,” the lawsuit read.

Another lawsuit alleges that a 17-year-old user, who complained to a Character AI chatbot about limited screen time, was told by the bot that it sympathises with children who kill their parents. The parents of a nine-year-old user in Texas, US, further accused Character AI of exposing their child to “hyper-sexualized content,” causing her to develop “sexualised behaviours prematurely.”

New teen safety tools

Character AI is rolling out a new model for under-18 users that will provide dialled-down responses to user prompts on certain topics such as violence and romance. “The goal is to guide the model away from certain responses or interactions, reducing the likelihood of users encountering, or prompting the model to return, sensitive or suggestive content,” the company said in a blog post.


In an attempt to block inappropriate user inputs and model outputs, Character AI said it is developing new content classifiers. On the user end, the classifier will help detect and block content that violates Character AI’s community guidelines. “In certain cases where we detect that the content contains language referencing suicide or self-harm, we will also surface a specific pop-up directing users to the National Suicide Prevention Lifeline,” it said.

On the other end, Character AI said it has added new classifiers and improved existing ones to identify specific types of content in the model’s responses. Additionally, users will no longer be able to edit a chatbot’s responses, a feature that previously allowed those edits to shape the bot’s subsequent replies.

Notably, users who spend more than 60 minutes on the app will see a time-out notification; adult users will be able to adjust this limit in the future.

When users create characters using words such as “psychologist,” “therapist,” “doctor,” or other similar professions, they will be shown language indicating that the chatbot’s responses should not be mistaken for professional advice.


Down the road, Character AI said it will introduce parental controls that give parents or caregivers insight into which AI characters their children are talking to and how much time they spend on the platform.

If you or someone you know is suicidal, please call these mental health helpline numbers. 
